I have written a MapReduce Java program and run it in Hadoop. Some configuration must not be set right, and I don't understand my error. I have tried various workarounds, but I keep getting similar errors.
You must make your mapper and reducer public:
public static class AnnualTaxCalculaterMapper
public static class AnnualTaxCalculaterReducer
The Hadoop API cannot access package-private classes.
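A minimal sketch of the corrected declarations (the enclosing job class name and the key/value types below are assumptions; the important part is the public static on the nested classes so Hadoop can instantiate them via reflection):

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class AnnualTaxCalculater {

    public static class AnnualTaxCalculaterMapper
            extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        // your existing map() logic goes here
    }

    public static class AnnualTaxCalculaterReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        // your existing reduce() logic goes here
    }
}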
How do you configure unit testing framework to help develop code that is part of AnyLogic agents?
To have a suitable test driven development rhythm, I need to be able to run all tests in a few seconds. I thought of exporting the project as a standalone application (jar) each time, but that's pretty slow.
I thought of trying to write all the code outside AnyLogic in separate classes, but there are many references to built-in AnyLogic classes, as well as various agents. My code would need to refer to these somehow, and I'm not sure how to do that except by writing the code inside AnyLogic.
I wonder if there's a way of adding the test runner as a dependency, and executing that test runner from within AnyLogic.
Does anyone have a setup that works nicely?
This definitely requires some advanced Java, but testing, especially unit testing, is too often neglected when building good, robust models. I hope this simple example is enough to get you (and lots of other modellers) going.
For JUnit testing we make use of two libraries that you can add as dependencies to your model (typically the JUnit 4 jar and its hamcrest-core dependency).
Now there are two main types of logic that you will want to test in simulation models:
Functions in Java classes
Model execution
Type 1: Suppose I have this very simple Java class
public class MyClass {

    public MyClass() {
    }

    public boolean getResult() {
        return true;
    }
}
And I want to test the function getResult()
I can simply create a new class with a test method annotated with @Test, and make use of the assertEquals() method, which is standard in JUnit testing:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyTestClass {

    @Test
    public void testMyClassFunction1() {
        boolean result = new MyClass().getResult();
        assertEquals("The value of the test class 1", true, result);
    }
}
Now comes the AnyLogic-specific implementation (there are other ways to do this, but this is the easiest and most useful, as you will see in a minute).
You need to create a custom experiment
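For illustration, the code field of that custom experiment (called RunAllTests here; the name and the exact output wording are assumptions) could look something like the sketch below, which runs the test class through JUnit's JUnitCore and prints a small summary:

// code field of the RunAllTests custom experiment (sketch)
org.junit.runner.JUnitCore junit = new org.junit.runner.JUnitCore();
org.junit.runner.Result result = junit.run(MyTestClass.class);

// print a small summary (traceln writes to the AnyLogic console)
traceln(result.wasSuccessful() ? "SUCCESS" : "FAILURE");
traceln("Run: " + result.getRunCount());
traceln("Failed: " + result.getFailureCount());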
Now if you run this from the Run Model button you will get this output
SUCCESS
Run: 1
Failed: 0
You can obviously update and change the output to your liking.
Type 2: Suppose we have this very simple model
And the function getResult() simply returns an int of 2.
Now we need to create another custom experiment to run this model
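As a sketch, the additional class code of that experiment (called SingleRun here, matching the test below; the engine calls follow AnyLogic's standard custom-experiment template, so treat the details as assumptions) could define a method like this:

// additional class code of the SingleRun custom experiment (sketch)
public int runExperiment() {
    Engine engine = createEngine();
    engine.setStopTime(100); // the stop time is an arbitrary placeholder
    // create the top-level agent, run the model to completion, then read the result
    Main root = new Main(engine, null, null);
    engine.start(root);
    engine.runFast();
    int result = root.getResult();
    engine.stop();
    return result;
}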
And then we can write a test to run this Custom Experiment and check the result
Simply add the following to your MyTestClass
@Test
public void testMyClassFunction2() {
    int result = new SingleRun(null).runExperiment();
    assertEquals("Value of a single run", 2, result);
}
And now if you run the RunAllTests custom experiment it will give you this output
SUCCESS
Run: 2
Failed: 0
This is just the beginning; you can read up on tons of ways to use JUnit to your advantage.
I'm trying to configure a Spring Boot 1.5.9 project with multiple data sources, of which some are Neo4j.
The version of spring-data-neo4j I'm using is 4.2.9.
My goal is to use a different SessionFactory for different repositories, using a different Configuration class for each.
I've got this all working with Mongo, but it seems that, even though the sessionFactoryRef is available on @EnableNeo4jRepositories, it simply does not get acted upon.
Abbreviated version of my configuration, with the general concepts:
@org.springframework.context.annotation.Configuration
@EnableNeo4jRepositories(basePackages = "<repo-package-name>", sessionFactoryRef = NEO4J_SESSIONFACTORY_NAME)
public class MyConfiguration {

    protected static final String NEO4J_SESSIONFACTORY_NAME = "mySessionFactory";

    @Bean(NEO4J_SESSIONFACTORY_NAME)
    public SessionFactory mySessionFactory() {
        SessionFactory sessionFactory = ...
        // passing entity package corresponding to repository
        return sessionFactory;
    }
}
As mentioned, this construct works fine with spring-data-mongodb; with Neo4j, however, it first fails to start with an error:
***************************
APPLICATION FAILED TO START
***************************
Description:
A component required a bean named 'getSessionFactory' that could not be found.
Action:
Consider defining a bean named 'getSessionFactory' in your configuration.
Turning on debug in the logger and a look through the code led me to SessionBeanDefinitionRegistrarPostProcessor, which contains the following code to get the session factory:
private static String getSessionFactoryBeanRef(ConfigurableListableBeanFactory beanFactory) {
    return beanFactory.containsBeanDefinition("sessionFactory") ? "sessionFactory" : "getSessionFactory";
}
Hmmm... hardcoded names for a bean, no sign of customisability.
I then proceeded to name my bean twice, @Bean({"sessionFactory", NEO4J_SESSIONFACTORY_NAME}), so the above code would pass.
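For reference, a minimal sketch of that double-named bean declaration (the scanned entity package here is just a placeholder):

// Sketch of the double-naming workaround; the entity package is a placeholder
@Bean(name = { "sessionFactory", NEO4J_SESSIONFACTORY_NAME })
public SessionFactory mySessionFactory() {
    return new SessionFactory("com.example.neo4j.domain");
}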
The application started, but the problem is that the repositories get wired with whatever bean is called sessionFactory, effectively ignoring the sessionFactoryRef on the annotation.
To test this, I changed the name on the annotation to a non-existing bean and it continued to start (if I do this with the mongo-annotation, the application quits because the bean mentioned in mongoTemplateRef isn't available).
I dug a little deeper and found that, for mongo, it retrieves the bean reference in this class. The equivalent neo4j implementation has no such thing. It could of course be an implementation detail but I wasn't able to find any reference to the sessionFactoryRef attribute other than the annotation and the xml-schema.
There are also other places in the config classes that expect only one SessionFactory to be available.
So, in short, it seems to me that EnableNeo4jRepositories.sessionFactoryRef has no implementation and therefore simply doesn't do anything.
As a result, with the current code a single bean "sessionFactory" must be present and all repositories will be wired with this bean, regardless of the value of sessionFactoryRef.
Anybody else with a similar experience or any idea how to file a bug for this?
I'm trying to test my MapReduce job with MRUnit. The integration test works, but I also have some unit tests that I want to pass.
My MRUnit driver and MapReduce class are:
MapDriver<ImmutableBytesWritable, Result, ImmutableBytesWritable, KeyValue>
public final class HashMapper extends
TableMapper<ImmutableBytesWritable, KeyValue>
When I define the input I get an error:
mapDriver.withInput(new ImmutableBytesWritable(Bytes
.toBytes("query")), new Result(kvs1));
java.lang.NullPointerException
at org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:73)
at org.apache.hadoop.mrunit.internal.io.Serialization.copy(Serialization.java:91)
at org.apache.hadoop.mrunit.internal.output.MockOutputCollector.collect(MockOutputCollector.java:48)
at org.apache.hadoop.mrunit.internal.mapreduce.AbstractMockContextWrapper$4.answer(AbstractMockContextWrapper.java:90)
at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:29)
at org.mockito.internal.MockHandler.handle(MockHandler.java:95)
at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
I guess it's because it doesn't like the Result and KeyValue objects, since they're not Writable, but I don't understand why the integration test works then. It was working before with HBase 0.94, when all these objects implemented Writable; now I'm working with HBase 0.96. Any clue how I should use MRUnit here?
With HBase 0.96 some classes no longer implement Writable, but the HBase project has created new serialization classes for them.
So the solution is to indicate in the Configuration which serialization classes MRUnit must use:
The property is called io.serializations
The different serializers are:
Result class: org.apache.hadoop.hbase.mapreduce.ResultSerialization
KeyValue class: org.apache.hadoop.hbase.mapreduce.KeyValueSerialization
Put & Get classes: org.apache.hadoop.hbase.mapreduce.MutationSerialization
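A minimal sketch of how that can be wired in the test setup (the driver and mapper types mirror the question; the test class name and the exact wiring via MRUnit's getConfiguration() and Hadoop's setStrings() are my assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Before;

public class HashMapperTest {

    private MapDriver<ImmutableBytesWritable, Result, ImmutableBytesWritable, KeyValue> mapDriver;

    @Before
    public void setUp() {
        mapDriver = MapDriver.newMapDriver(new HashMapper());

        // keep the default Writable serialization and append the HBase serializers
        Configuration conf = mapDriver.getConfiguration();
        conf.setStrings("io.serializations", conf.get("io.serializations"),
                "org.apache.hadoop.hbase.mapreduce.ResultSerialization",
                "org.apache.hadoop.hbase.mapreduce.KeyValueSerialization",
                "org.apache.hadoop.hbase.mapreduce.MutationSerialization");
    }
}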
My problem is that my code, which was working on Java 6, does not work any more.
Since my app needs to load jars at runtime (plugins), I wrote myself a simple class deriving from URLClassLoader, like this:
public class MyClassLoader extends java.net.URLClassLoader {

    /** Creates a new instance of URLClassLoader */
    public MyClassLoader(java.net.URL url) {
        super(new java.net.URL[]{url}, ClassLoader.getSystemClassLoader());
    }

    public void addURL(java.net.URL url) {
        super.addURL(url);
    }
}
So if I want to load a jar, I simply call addURL(pathToJar) and load the class via
Class.forName(myClass, true, myClassLoader)
This worked like a charm running on Java6.
Now I decided to make a self-contained Java app in Java 7.
When I start the app, the jars also get loaded at runtime, but if there's a class inside that derives from a class on the classpath (not in the plugin jar), I get a ClassCastException.
So I guess something has changed in Java 7.
At the moment I'm using Java7_u13 on OS X.
Can anyone give me a hint on what I should do to get the old behaviour back? Searching the net hasn't helped me yet.
Many thanks in advance.
Greetings, -chris-
Meanwhile I found the solution to my problem: I had just used the 'wrong' classloader as the parent. In a self-contained app the application classes are apparently no longer loaded by the system class loader, so plugin classes that extend classes on the classpath end up with a different defining class loader, hence the ClassCastException. Everything now works fine if I replace
super(new java.net.URL[]{url},ClassLoader.getSystemClassLoader());
with
super(new java.net.URL[]{url},MyClassLoader.class.getClassLoader());
Greetings, -chris-
My program needs to interact with a directory (with a hierarchical structure) a lot, and I need to test it. Therefore, I need to create a directory (and then sub-directories and files) during the JUnit test and delete this directory after the test.
Is there a good way to do this conveniently?
Look at the methods on java.io.File. If it isn't a good fit, explain why.
You should create your test directory structure in the BeforeClass/Before JUnit annotated methods and remove them in AfterClass/After (have a look at the JUnit FAQ, e.g. How can I run setUp() and tearDown() code once for all of my tests?).
If java.io.File does not offer all you need to prepare your directory structure have a look at com.google.common.io.Files (google guava) or org.apache.commons.io.FileUtils (apache commons io).
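For instance, a small sketch using Apache Commons IO (the class name, paths and file names here are just placeholders):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;

public class DirectoryDependentTest {

    private File testRoot;

    @Before
    public void createTestDirectories() throws Exception {
        // build the hierarchy the code under test expects
        testRoot = new File("target/test-dir");
        new File(testRoot, "sub/dir").mkdirs();
        FileUtils.writeStringToFile(new File(testRoot, "sub/dir/data.txt"), "some content");
    }

    @After
    public void removeTestDirectories() throws Exception {
        // clean up everything created above
        FileUtils.deleteDirectory(testRoot);
    }
}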
If you can use JUnit 4.7, you can use the TemporaryFolder rule:
@RunWith(JUnit4.class)
public class FooTest {

    @Rule
    public TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void doStuffThatTouchesFiles() {
        File root = tempFolder.newFolder("root");
        MyProgram.setRootTemporaryFolder(root);

        // ... continue your test
    }
}
You could also use the rule in an @Before method. Starting with JUnit 4.9, you can make the rule field static and annotate it with @ClassRule, so you can use it in a @BeforeClass method.
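A small sketch of that static variant (a hypothetical test class, assuming JUnit 4.9+):

import java.io.File;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.rules.TemporaryFolder;

public class FooClassRuleTest {

    // one temporary folder shared by every test in this class
    @ClassRule
    public static TemporaryFolder sharedTempFolder = new TemporaryFolder();

    @BeforeClass
    public static void setUpOnce() throws Exception {
        // build the directory structure all tests rely on
        File root = sharedTempFolder.newFolder("root");
        new File(root, "sub").mkdirs();
    }
}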
See this article for details
You can just create a temp directory.
Take a look at How to create a temporary directory/folder in Java?
If you need to create the directory on a remote machine, connect via SSH and run a shell command.
Some SSH libs: SSH Connection Java
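As an illustration only, a sketch using JSch (just one of the libraries typically suggested for this; host, user, password and path are placeholders):

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class RemoteDirHelper {

    // creates a directory on a remote host over SSH
    public static void createRemoteDir(String host, String user, String password, String path) throws Exception {
        Session session = new JSch().getSession(user, host, 22);
        session.setPassword(password);
        session.setConfig("StrictHostKeyChecking", "no"); // test-only convenience
        session.connect();

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("mkdir -p " + path);
        channel.connect();
        channel.disconnect();
        session.disconnect();
    }
}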