How to use the Google-provided template [Pub/Sub to Datastore]? - google-cloud-platform

I want to use this Google-provided template, which streams data from Pub/Sub to Datastore:
https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/master/src/main/java/com/google/cloud/teleport/templates/PubsubToDatastore.java
I followed the steps described in this document:
https://github.com/GoogleCloudPlatform/DataflowTemplates
This step passed:
mvn clean && mvn compile
But at the next step, this error occurred:
[INFO] --- exec-maven-plugin:1.6.0:java (default-cli) @ google-cloud-teleport-java ---
2018-08-17 13:36:19 INFO DataflowRunner:266 - PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 117 files. Enable logging at DEBUG level to see which files will be staged.
[WARNING]
java.lang.IllegalStateException: Missing required properties: errorTag
at com.google.cloud.teleport.templates.common.AutoValue_DatastoreConverters_WriteJsonEntities$Builder.build(AutoValue_DatastoreConverters_WriteJsonEntities.java:89)
at com.google.cloud.teleport.templates.PubsubToDatastore.main(PubsubToDatastore.java:65)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:282)
at java.lang.Thread.run(Thread.java:748)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 35.348 s
[INFO] Finished at: 2018-08-17T13:36:20+09:00
[INFO] Final Memory: 59M/146M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.6.0:java (default-cli) on project google-cloud-teleport-java: An exception occured while executing the Java class. Missing required properties: errorTag -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Then I tried the DatastoreToPubsub and (GCS) TextToDatastore templates, and both ran successfully.
https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/master/src/main/java/com/google/cloud/teleport/templates/DatastoreToPubsub.java
https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/master/src/main/java/com/google/cloud/teleport/templates/TextToDatastore.java
So I can't understand what the problem is. Where did I go wrong?
Any advice would be appreciated.
Regards.

Looks like you found a bug in that particular Dataflow template: the pipeline does not configure an error path, even though one is required when writing JSON entities. The fix is relatively simple and should be pushed to master shortly. In the meantime, you can get the pipeline working with two changes to the PubsubToDatastore pipeline code.
First, modify the code so that PubsubToDatastoreOptions extends the ErrorWriteOptions interface. Your new options declaration should look similar to the following:
interface PubsubToDatastoreOptions
    extends PipelineOptions,
        PubsubReadOptions,
        JavascriptTextTransformerOptions,
        DatastoreWriteOptions,
        ErrorWriteOptions {}
Then modify the code within the main method so that the pipeline configures an error TupleTag and routes any error messages to the LogErrors transform. This ensures that any data which fails to be written to Datastore is captured and stored on GCS. Your new main method should look similar to the following:
TupleTag<String> errorTag = new TupleTag<String>(){};

Pipeline pipeline = Pipeline.create(options);

pipeline
    .apply(PubsubIO.readStrings()
        .fromTopic(options.getPubsubReadTopic()))
    .apply(TransformTextViaJavascript.newBuilder()
        .setFileSystemPath(options.getJavascriptTextTransformGcsPath())
        .setFunctionName(options.getJavascriptTextTransformFunctionName())
        .build())
    .apply(WriteJsonEntities.newBuilder()
        .setProjectId(options.getDatastoreWriteProjectId())
        .setErrorTag(errorTag)
        .build())
    .apply(LogErrors.newBuilder()
        .setErrorWritePath(options.getErrorWritePath())
        .setErrorTag(errorTag)
        .build());
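For reference, the ErrorWriteOptions interface mixed in above lives in the templates' common package and essentially just declares the error output path consumed by LogErrors. A rough sketch of its shape (treat the exact annotations and package as illustrative; check the repo for the real definition):

import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;

// Sketch only -- the real definition ships with the DataflowTemplates repo.
public interface ErrorWriteOptions extends PipelineOptions {
    @Description("GCS path prefix for records that failed to write, e.g. gs://<bucket>/errors")
    ValueProvider<String> getErrorWritePath();

    void setErrorWritePath(ValueProvider<String> errorWritePath);
}

Since the pipeline now reads this option, remember to also pass --errorWritePath (a GCS location) in -Dexec.args when staging or running the template.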

Related

Cannot create directory while running `sbt IntegrationTest/test` with HBaseTestingUtility.startMiniDFSCluster

When creating a mini HDFS cluster in integration tests with the help of HBaseTestingUtility.startMiniDFSCluster, the tests run fine in IntelliJ IDEA but fail when run via sbt IntegrationTest/test. The error looks like this:
22:00:38.430 [pool-5-thread-4] WARN o.a.h.hdfs.server.namenode.NameNode - Encountered exception during format:
java.io.IOException: Cannot create directory /Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/name-0-1/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:361)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:571)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:592)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1211)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:406)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:233)
at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1071)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:987)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:884)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:798)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:667)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:640)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1129)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1104)
...
...
22:00:38.435 [pool-5-thread-4] ERROR o.apache.hadoop.hdfs.MiniDFSCluster - IOE creating namenodes. Permissions dump:
path '/Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/data':
absolute:/Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/data
permissions: ----
...
...
[info] FooIntegrationTest:
[info] bar.foo.FooIntegrationTest *** ABORTED ***
[info] java.io.IOException: Cannot create directory /Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/name-0-1/current
...
Parallel execution of the test suites was causing the issue.
Set execution to serial in build.sbt:
IntegrationTest / parallelExecution := false, // embedded HBase has trouble when suites run in parallel
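In context, the setting sits in the project's settings block, e.g. (a minimal sketch assuming the standard IntegrationTest setup via Defaults.itSettings; the project name is a placeholder):

lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings)
  .settings(
    // Run integration suites one at a time so the embedded HBase/HDFS
    // mini clusters don't step on each other.
    IntegrationTest / parallelExecution := false
  )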

Quarkus native image build fails with unknown arguments

I am building a Quarkus native executable image, but it is failing because of an unknown argument.
I have set the quarkus.native.additional-build-args property in my properties file, but it is not working.
I am using Java 11. Here is the build output:
SLF4J: Found binding in [jar:file:/home/quarkus/.m2/repository/org/jboss/slf4j/slf4j-jboss-logging/1.2.0.Final/slf4j-jboss-logging-1.2.0.Final.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/quarkus/.m2/repository/ch/qos/logback/logback-classic/1.1.1/logback-classic-1.1.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.jboss.slf4j.JBossLoggerFactory]
[INFO] [org.jboss.threads] JBoss Threads version 3.1.1.Final
[INFO] [io.quarkus.deployment.pkg.steps.JarResultBuildStep] Building native image source jar: /usr/src/app/target/quarkus-test-1.0.0-SNAPSHOT-native-image-source-jar/quarkus-test-1.0.0-SNAPSHOT-runner.jar
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] Building native image from /usr/src/app/target/quarkus-test-1.0.0-SNAPSHOT-native-image-source-jar/quarkus-test-1.0.0-SNAPSHOT-runner.jar
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] Running Quarkus native-image plugin on GraalVM Version 20.1.0 (Java Version 11.0.7)
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildStep] /opt/graalvm/bin/native-image -J-Djava.util.logging.manager=org.jboss.logmanager.LogManager -J-Dsun.nio.ch.maxUpdateArraySize=100 -J-Dvertx.logger-delegate-factory-class-name=io.quarkus.vertx.core.runtime.VertxLogDelegateFactory -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=1 -J-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true -J-Duser.language=en -J-Dfile.encoding=UTF-8 --initialize-at-run-time=com.wealdtech.hawk.HawkClient com.wealdtech.hawk.HawkCredential --allow-incomplete-classpath --initialize-at-build-time= -H:InitialCollectionPolicy=com.oracle.svm.core.genscavenge.CollectionPolicy\$BySpaceAndTime -H:+JNI -jar quarkus-test-1.0.0-SNAPSHOT-runner.jar -H:FallbackThreshold=0 -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:EnableURLProtocols=http,https --enable-all-security-services -H:NativeLinkerOption=-no-pie --no-server -H:-UseServiceLoaderFeature -H:+StackTrace quarkus-test-1.0.0-SNAPSHOT-runner
Error: Unknown argument: quarkus-test-1.0.0-SNAPSHOT-runner
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 21.640 s
[INFO]
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.quarkus:quarkus-maven-plugin:1.9.0.Final:build (default) on project quarkus-test: Failed to build quarkus application: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[ERROR] [error]: Build step io.quarkus.deployment.pkg.steps.NativeImageBuildStep#build threw an exception: java.lang.RuntimeException: Failed to build native image
[ERROR] at io.quarkus.deployment.pkg.steps.NativeImageBuildStep.build(NativeImageBuildStep.java:307)
[ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Does anyone have any idea?
Thank you!
I had the same issue when using quarkus.native.additional-build-args in the application.properties file. The issue was that I had defined multiple packages in the --initialize-at-build-time parameter, separated with a comma, as: --initialize-at-build-time=javax.net.ssl,java.security
This needed to be masked as follows:
--initialize-at-build-time=javax.net.ssl\\,java.security
Just in case anybody else is losing hair over this and ends up here: my build was failing in a similar way:
building quarkus jar
Error: Unknown argument: <yada, yada...>
Looking at my application.properties, I had some build args:
quarkus.native.additional-build-args=\
-H:+PrintClassInitialization,\
--report-unsupported-elements-at-runtime,\
--allow-incomplete-classpath,\
--initialize-at-run-time=a.b.c.d\\,\
a.b.c.e\\,\
a.b.c.f
Nothing obvious there, then. Looking closer, though:
--allow-incomplete-classpath,\
That line had two whitespace characters at the end, after the ,\. Removing the whitespace fixed it. OUCH!
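One way to sidestep the continuation-line trap entirely is to keep the whole value on a single line, since it's the trailing \ plus stray spaces that broke things here. A sketch using the same placeholder packages as above:

quarkus.native.additional-build-args=-H:+PrintClassInitialization,--report-unsupported-elements-at-runtime,--allow-incomplete-classpath,--initialize-at-run-time=a.b.c.d\\,a.b.c.e\\,a.b.c.f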

Codenvy C++ Hello World program won't build?

I am starting to try and use an online IDE, so I started with Codenvy. I created a workspace and a project and I typed in the following code for a Hello World program just to test the IDE.
#include <iostream>

int main() {
    std::cout << "Hello World!" << std::endl;
    return 0;
}
It didn't build correctly. This is what the build log says:
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-readable POM /projects/Testing-CPP/pom.xml: /projects/Testing-CPP/pom.xml (No such file or directory) @
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project (/projects/Testing-CPP/pom.xml) has 1 error
[ERROR] Non-readable POM /projects/Testing-CPP/pom.xml: /projects/Testing-CPP/pom.xml (No such file or directory)
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
Can anyone point me in the right direction to getting the IDE to build and run my code?
It looks like you have your project set up as the Java/Maven type, so it's looking for a pom.xml and probably trying to run mvn clean install.
Project typing is one of the powerful paradigms in Codenvy and Eclipse Che - it allows projects with specific "types" to assume certain behaviors and auto-configure certain things in the environment. So a Maven-typed Java app knows that Maven must be installed and can auto-add a build command for mvn clean install, since that works with nearly every Maven app.
Try starting with a clean workspace based on the Codenvy C++ stack and the console-cpp-simple sample application. When you get in the workspace you'll see you have a build command that executes a gcc command.
Then you can import your project from inside the IDE by going to Workspace > Import Project. You can then copy the build command from the sample app and (if necessary) modify it for your app. Once your app compiles, you can just delete the hello world C++ sample app.
You can also select the project you have now and choose Project > Configuration but that won't necessarily add the right compile command for you.
Your code is all right, so that means there is something wrong with the way you have set up your project, as the error message specifies:
The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project (/projects/Testing-CPP/pom.xml) has 1 error
[ERROR] Non-readable POM /projects/Testing-CPP/pom.xml: /projects/Testing-CPP/pom.xml (No such file or directory)
The following link provides a tutorial on how to write your first Hello World program on Codenvy:
Running C++ Hello World in the Cloud - Blog
Go over the instructions provided; if there is anything they did that you didn't, that is probably where you went wrong.
Good luck!

Is the use of 'aggregate' followed by 'dependsOn' redundant with the same modules?

In sbt, is the use of aggregate followed by dependsOn redundant if they both contain the same sub-modules? According to the documentation it seems so, but I have seen this pattern used before and I don't understand what the benefit is. If a project is defined with dependencies, doesn't that already imply what aggregate does for those same dependencies? I notice that my project build is much slower with this seemingly redundant aggregate than without it, and I'd like to know if I can safely remove it.
lazy val module = sbt.Project(...) dependsOn (foo, bar) aggregate (foo, bar)
OR just...
lazy val module = sbt.Project(...) dependsOn (foo, bar)
I am using SBT 0.13.6
tl;dr aggregate causes tasks to be executed in the aggregating module and all aggregated ones, while dependsOn sets a CLASSPATH dependency, so the libraries are visible to the depending module (in the configuration given; that's compile, the default, in the example).
A sample to demonstrate the differences.
I'm using the following build.sbt (nothing really interesting):
lazy val a = project
lazy val b = project
lazy val c = project dependsOn b aggregate (a,b)
The build defines three modules a, b, and c, with the last one, c, aggregating a and b. There's also a fourth module - an implicit one - that aggregates all the modules a, b, and c.
> projects
[info] In file:/Users/jacek/sandbox/aggregate-dependsOn/
[info] a
[info] * aggregate-dependson
[info] b
[info] c
When I execute a task in the aggregating module, the task is also going to be executed in the aggregated modules.
> compile
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}b...
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}a...
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}aggregate-dependson...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}c...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[success] Total time: 0 s, completed Oct 22, 2014 9:33:20 AM
The same happens when I execute a task in c: it will in turn be executed in a and b, but not in the top-level project.
> show c/clean
[info] a/*:clean
[info] ()
[info] b/*:clean
[info] ()
[info] c/*:clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:34:26 AM
When a task's executed in a or b, it runs only within that project.
> show a/clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:34:43 AM
Whether or not a task is executed in aggregating projects is controlled by the aggregate key, scoped to a project and/or task.
> show aggregate
[info] a/*:aggregate
[info] true
[info] b/*:aggregate
[info] true
[info] c/*:aggregate
[info] true
[info] aggregate-dependson/*:aggregate
[info] true
Change it as described in Aggregation:
In the project doing the aggregating, the root project in this case, you can control aggregation per-task. (...) aggregate in update is the aggregate key scoped to the update task.
Below I'm changing the key for the c module and the clean task, so clean is no longer executed in the aggregated modules a and b:
> set aggregate in (c, clean) := false
[info] Defining c/*:clean::aggregate
[info] The new value will be used by no settings or tasks.
[info] Reapplying settings...
[info] Set current project to aggregate-dependson (in build file:/Users/jacek/sandbox/aggregate-dependsOn/)
> show c/clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:39:13 AM
The other tasks for c are unaffected; executing them in c will still run them in the aggregated modules:
> show c/libraryDependencies
[info] a/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
[info] b/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
[info] c/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
While aggregate sets a dependency for sbt tasks so they get executed in the other aggregated modules, dependsOn sets a CLASSPATH dependency, i.e. code in the depended-on module is visible in the depending one.
Let's assume b has a main object as follows:
object Hello extends App {
  println("Hello from B")
}
Save the Hello object to b/hello.scala, i.e. under the b module.
Since c was defined to dependsOn b (see build.sbt above), the Hello object is visible in b (because it belongs to that module), but also in c.
> b/run
[info] Running Hello
Hello from B
[success] Total time: 0 s, completed Oct 22, 2014 9:46:44 AM
> c/runMain Hello
[info] Running Hello
Hello from B
[success] Total time: 0 s, completed Oct 22, 2014 9:46:58 AM
(I had to use runMain in c, as run alone couldn't see the class; I can't explain why.)
Trying to run the task in a ends up with java.lang.ClassNotFoundException: Hello, since the class is not visible in that module.
> a/runMain Hello
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}a...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Running Hello
[error] (run-main-6) java.lang.ClassNotFoundException: Hello
java.lang.ClassNotFoundException: Hello
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[trace] Stack trace suppressed: run last a/compile:runMain for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last a/compile:runMain for the full output.
[error] (a/compile:runMain) Nonzero exit code: 1
[error] Total time: 0 s, completed Oct 22, 2014 9:48:15 AM
Redefine a to dependsOn b in build.sbt and the exception vanishes.
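That is, something like this (a sketch against the sample build.sbt above):

lazy val a = project dependsOn b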
You should read Multi-project builds in the official documentation.

Java binding for LMDB, need help to refresh LMDB JNI

We are using LMDB within a Java application.
The Java bindings that are available are a year old.
I would like to refresh the LMDBJNI:
https://github.com/chirino/lmdbjni
However, the project owner did not provide any instructions on how to build his project.
So I cannot just clone his git repository, drop in the .c and .h files from the new version of LMDB ( https://git.gitorious.org/mdb/mdb.git ), and get it rebuilt.
It seems that underneath, LMDBJNI uses HawtJNI, but that's as far as I got.
These are the steps I tried:
a) git clone https://github.com/chirino/lmdbjni.git
b) cd lmdbjni; mvn install
It finishes successfully; however, the resulting JAR does not contain the actual compiled lmdb library.
So my test program fails with
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no lmdbjni64-99-vspmaster-SNAPSHOT in java.library.path, no lmdbjni-99-vspmaster-SNAPSHOT in java.library.path, no lmdbjni in java.library.path]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.lmdbjni.JNI.<clinit>(JNI.java:41)
at org.fusesource.lmdbjni.Env.create(Env.java:42)
at org.fusesource.lmdbjni.Env.<init>(Env.java:36)
at com.db.locrefdcache.App.main(App.java:27)
... 6 more
c) Then I figured I may not be able to just run mvn install for lmdbjni; instead I need to explicitly build its 64-bit Linux subsystem.
So I did:
cd lmdbjni/lmdbjni-linux64
mvn install
There I can see that it's trying to run a configure script (generated by autotools), but I get this:
...
[INFO] checking lmdb.h usability... no
[INFO] checking lmdb.h presence... no
[INFO] checking for lmdb.h... no
[INFO] configure: error: cannot find headers for lmdb
[INFO] rc: 1
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
So what I do not quite understand is whether the lmdb files (lmdb.h, mdb.c, midl.h) need to be explicitly dropped somewhere, or whether HawtJNI needs to be run on them first to create some sort of 'intermediate' .c and .h files that later get dropped into this build environment.
Update with the compile error I am getting when using the deephacks LMDBJNI fork
Reason for the recompile: deephacks's LMDBJNI project publishes a Maven artifact for LMDBJNI built against the latest LMDB; however, it was compiled with Java 8 (which we do not use yet), so I need to recompile it with Java 7.
I modified pom.xml and changed the source level from 1.8 to 1.7.
Then mvn install -P linux64 produces an error:
...
[INFO] [hawtjni:build {execution: default}]
[INFO] Extracting /home/dev01/.m2/repository/org/deephacks/lmdbjni/lmdbjni/0.1.3-SNAPSHOT/lmdbjni-0.1.3-SNAPSHOT-native-src.zip to /home/dev01/devel/3dp/lmdbjni/lmdbjni-linux64/target/native-build-extracted
[INFO] executing: /bin/sh -c make install
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] build failed: org.apache.maven.plugin.MojoExecutionException: Make based build did not generate: /home/dev01/devel/3dp/lmdbjni/lmdbjni-linux64/target/native-build/target/lib/liblmdbjni.so
You might have better luck with this fork, which is actively maintained:
https://github.com/deephacks/lmdbjni
They've also published LMDB itself to Maven; you can see how that was set up here:
https://github.com/deephacks/lmdb
It builds fine on my machine with Java 7. Did you provide the correct profile when building the packages? For Linux you must use: mvn install -P linux64
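Once the native build succeeds, a quick way to verify that the resulting JAR actually bundles and loads the native library is a minimal smoke test. This sketch assumes the fusesource-style API visible in your stack trace (org.fusesource.lmdbjni.Env with a no-arg constructor plus open/close); treat the method names as illustrative and adjust them to whichever fork you end up building:

import org.fusesource.lmdbjni.Env;

public class LmdbLoadCheck {
    public static void main(String[] args) {
        // Constructing an Env triggers the static initializer of JNI, which
        // loads the native lmdbjni library; an UnsatisfiedLinkError here means
        // the .so was not packaged into (or found alongside) the JAR.
        Env env = new Env();
        env.open("/tmp/lmdb-smoke-test"); // the directory must already exist
        env.close();
        System.out.println("Native LMDB library loaded OK");
    }
}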