How do I resolve the error below? I have exported the Hcat-core.jar before running the code. Kindly help.
java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
Full Trace:
2016-07-28 20:12:48,465 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1468985268798_44020_000002
2016-07-28 20:12:48,653 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-07-28 20:12:48,690 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-07-28 20:12:48,690 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 44020 cluster_timestamp: 1468985268798 } attemptId: 2 } keyId: 618886960)
2016-07-28 20:12:48,811 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-07-28 20:12:49,675 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2016-07-28 20:12:49,744 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:478)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:458)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1560)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:458)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:377)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1518)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getOutputFormatClass(JobContextImpl.java:222)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:474)
... 11 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hive.hcatalog.mapreduce.HCatOutputFormat not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
... 13 more
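A common cause of this particular failure is that the jar was on the submitting client's classpath but was never shipped to the YARN ApplicationMaster, which is the process that tries to load HCatOutputFormat here. A minimal sketch of one usual fix, assuming the jar has first been copied to HDFS (the /libs path is an assumption), attaching it to the job's classpath via the distributed cache:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job

val conf = new Configuration()
val job = Job.getInstance(conf, "hcat-export")
// Ship the HCatalog jar with the job so the ApplicationMaster and tasks
// can resolve org.apache.hive.hcatalog.mapreduce.HCatOutputFormat at runtime.
job.addFileToClassPath(new Path("/libs/hive-hcatalog-core.jar"))

Equivalently, passing the jar with -libjars at submit time (for drivers that go through ToolRunner/GenericOptionsParser) has the same effect.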
I'm trying to run a Spark Structured Streaming job and save its checkpoint to Google Cloud Storage. I have a couple of jobs: one without aggregation works perfectly, but the second one, with aggregations, throws an exception. I found that some people have had similar issues with checkpointing on S3 because S3 doesn't support read-after-write semantics (https://blog.yuvalitzchakov.com/improving-spark-streaming-checkpoint-performance-with-aws-efs/), but GCS does, so everything should be fine. I'd be glad if anybody could share their experience with checkpointing.
import org.apache.spark.sql.streaming.ProcessingTime

// Kafka sink with a 5-second processing-time trigger, checkpointing to GCS
val writeToKafka = stream.writeStream
  .format("kafka")
  .trigger(ProcessingTime(5000))
  .option("kafka.bootstrap.servers", "localhost:29092")
  .option("topic", "test_topic")
  .option("checkpointLocation", "gs://test/check_test/Job1")
  .start()
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Committed partition 0 (task 1, attempt 0, stage 1.0)
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.streaming.CheckpointFileManager - Writing atomically to gs://test/check_test/Job1/state/0/0/1.delta using temp file gs://test/check_test/Job1/state/0/0/.1.delta.8a93d644-0d8e-4cb9-82b5-6418b9e63ffd.TID1.tmp
[Executor task launch worker for task 1] ERROR org.apache.spark.TaskContextImpl - Error in TaskCompletionListener
java.lang.NullPointerException
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:261)
at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:193)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Executor task launch worker for task 1] ERROR org.apache.spark.executor.Executor - Exception in task 0.0 in stage 1.0 (TID 1)
org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[task-result-getter-1] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[task-result-getter-1] ERROR org.apache.spark.scheduler.TaskSetManager - Task 0 in stage 1.0 failed 1 times; aborting job
[task-result-getter-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Removed TaskSet 1.0, whose tasks have all completed, from pool
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Cancelling stage 1
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Killing all running tasks in stage 1: Stage cancelled
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 1 (start at Job1.scala:53) failed in 9.863 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] INFO org.apache.spark.scheduler.DAGScheduler - Job 0 failed: start at Job1.scala:53, took 20.926657 s
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@228cec9e is aborting.
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@228cec9e aborted.
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.streaming.MicroBatchExecution - Query [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1] terminated with error
org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 35 more
Caused by: org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Writing job aborted.
=== Streaming Query ===
Identifier: [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]
Current Committed Offsets: {}
Current Available Offsets: {KafkaV2[Subscribe[NormalizedEvents]]: {"NormalizedEvents":{"0":46564}}}
Current State: ACTIVE
Thread State: RUNNABLE
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 35 more
Caused by: org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Thread-1] INFO org.apache.spark.SparkContext - Invoking stop() from shutdown hook
[Thread-1] INFO org.spark_project.jetty.server.AbstractConnector - Stopped Spark@1ce93c18{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
[Thread-1] INFO org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://10.25.12.222:4041
[dispatcher-event-loop-0] INFO org.apache.spark.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
[Thread-1] INFO org.apache.spark.storage.memory.MemoryStore - MemoryStore cleared
[Thread-1] INFO org.apache.spark.storage.BlockManager - BlockManager stopped
[Thread-1] INFO org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
[dispatcher-event-loop-1] INFO org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint - OutputCommitCoordinator stopped!
[Thread-1] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Shutdown hook called
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /private/var/folders/_t/7m21x7313gs74_yfv4txsr69b8yh87/T/temporaryReader-75fdf46f-7de0-4ca7-9c77-8bd034e4f5a3
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /private/var/folders/_t/7m21x7313gs74_yfv4txsr69b8yh87/T/spark-bde783f1-fa66-420f-87e7-5c1895ab7ccc
Checkpointing of Spark Streaming jobs to Google Cloud Storage has been fixed. The fix will be included in the GCS connector 2.1.4 and 2.2.0 releases.
You cannot use GCS as a checkpoint store if you do aggregations in your stream, at least with connector version 2.1.3 (Hadoop 2). It's perfectly fine if your transforms don't include any groupBy, but if they do, you should save your checkpoints to HDFS or somewhere else.
I got the same issue trying to write a stream to GCS in Spark 2.4.4. There is no problem using GCS as the write-stream sink, but I got the same null pointer exception when using GCS as the checkpoint location. Since I am running Spark on Google Dataproc, I can use the HDFS capabilities of the cluster's nodes instead.
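For reference, a minimal sketch of that workaround, keeping the Kafka sink from the question but moving only the checkpoint off GCS (the HDFS path is an assumption; on Dataproc it resolves to the cluster's own HDFS):

import org.apache.spark.sql.streaming.Trigger

// Same sink as in the question, but state and offsets are checkpointed
// to HDFS, which the aggregation's state store handles correctly.
val query = stream.writeStream
  .format("kafka")
  .trigger(Trigger.ProcessingTime(5000))
  .option("kafka.bootstrap.servers", "localhost:29092")
  .option("topic", "test_topic")
  .option("checkpointLocation", "hdfs:///checkpoints/Job1")
  .start()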
I had to port code from a private cloud to GCS. After some experimenting, these are the changes I made in order to run the code:
For GCS I set up a dual-region bucket and set a retention policy on it (I know it's weird, but I found this worked for me), though I set it for only one day. You can set up a lifecycle policy as well if you want.
I used OutputMode.Append instead of Update.
I replaced the agg with a flatMapGroupsWithState function.
For example, here is the original code:
events.withWatermark(eventTime = "timestamp", delayThreshold = configs(waterMarkConst))
  .groupBy("timestamp", "name")
  .agg(expr("sum(count) as cnt"))
  .select("timestamp", "name", "cnt")
  .toDF().as[(Timestamp, String, Double)]
  .map(record => M(record._2, record._3, record._1))
which was replaced by the following code:
events.withWatermark(eventTime = "timestamp", delayThreshold = configs(waterMarkConst))
  .groupByKey(m => m._1 + "." + m._2)
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.EventTimeTimeout())(updateSentMetricsAggregatedState)
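The answer does not show updateSentMetricsAggregatedState itself. For illustration only, here is one possible shape for it, assuming the grouped elements are (Timestamp, String, Double) tuples and the output rows mirror the original M case class; the SumState type and the one-minute timeout window are assumptions as well:

import java.sql.Timestamp
import org.apache.spark.sql.streaming.GroupState

// Hypothetical running-sum state; the real fields depend on M.
case class SumState(timestamp: Timestamp, name: String, cnt: Double)

def updateSentMetricsAggregatedState(
    key: String,
    rows: Iterator[(Timestamp, String, Double)],
    state: GroupState[SumState]): Iterator[SumState] = {
  if (state.hasTimedOut) {
    // Event-time timeout fired: emit the final aggregate and drop the state.
    val out = state.get
    state.remove()
    Iterator(out)
  } else {
    // Fold the new rows into the running sum and arm the timeout.
    val buffered = rows.toSeq
    val total = state.getOption.map(_.cnt).getOrElse(0.0) + buffered.map(_._3).sum
    buffered.headOption.foreach { case (ts, name, _) =>
      state.update(SumState(ts, name, total))
      state.setTimeoutTimestamp(ts.getTime + 60 * 1000L) // assumed 1-minute window
    }
    Iterator.empty
  }
}

With OutputMode.Append() and EventTimeTimeout, a group's rows are only emitted once its timeout expires, which lets the state store be finalized cleanly per micro-batch.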
Following is the error when I try to start the WSO2 server (on Ubuntu). It's a SQLite database. Looking at the error, it appears to be DB related. What could have gone wrong? Kindly give me some guidelines on how I can fix it:
testteam@gmcs:~/gauri/wso2am-2.1.0/bin$ bash wso2server.sh
JAVA_HOME environment variable is set to /usr/lib/jvm/java-8-oracle
CARBON_HOME environment variable is set to /home/testteam/gauri/wso2am-2.1.0
Using Java memory options: -Xms256m -Xmx1024m
[2017-08-21 11:50:23,345] INFO - QpidBundleActivator Setting BundleContext in PluginManager
[2017-08-21 11:50:25,532] INFO - CarbonCoreActivator Starting WSO2 Carbon...
[2017-08-21 11:50:25,532] INFO - CarbonCoreActivator Operating System : Linux 4.10.0-32-generic, amd64
[2017-08-21 11:50:25,532] INFO - CarbonCoreActivator Java Home : /usr/lib/jvm/java-8-oracle/jre
[2017-08-21 11:50:25,532] INFO - CarbonCoreActivator Java Version : 1.8.0_131
[2017-08-21 11:50:25,533] INFO - CarbonCoreActivator Java VM : Java HotSpot(TM) 64-Bit Server VM 25.131-b11,Oracle Corporation
[2017-08-21 11:50:25,533] INFO - CarbonCoreActivator Carbon Home : /home/testteam/gauri/wso2am-2.1.0
[2017-08-21 11:50:25,533] INFO - CarbonCoreActivator Java Temp Dir : /home/testteam/gauri/wso2am-2.1.0/tmp
[2017-08-21 11:50:25,533] INFO - CarbonCoreActivator User : testteam, en-IN, Asia/Kolkata
[2017-08-21 11:50:25,839] WARN - ValidationResultPrinter Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
[2017-08-21 11:50:26,229] INFO - KafkaEventAdapterServiceDS Successfully deployed the Kafka output event adaptor service
[2017-08-21 11:50:26,370] INFO - ManagementModeConfigurationLoader CEP started in Single node mode
[2017-08-21 11:50:26,458] INFO - TemplateDeployerServiceTrackerDS Successfully deployed the execution manager tracker service
[2017-08-21 11:50:30,147] ERROR - DefaultRealm nullType class java.lang.reflect.InvocationTargetException
org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:401)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:222)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:127)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:263)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:100)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:113)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:68)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:355)
... 22 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:839)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:4064)
at org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager.<init>(JDBCUserStoreManager.java:280)
at org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager.<init>(JDBCUserStoreManager.java:222)
... 27 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:991)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:827)
... 30 more
Caused by: org.h2.jdbc.JdbcSQLException: General error: "java.lang.RuntimeException: rowCount expected 7545 got 13 T90.I92" [50000-175]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:161)
at org.h2.message.DbException.convert(DbException.java:284)
at org.h2.table.RegularTable.addRow(RegularTable.java:137)
at org.h2.store.PageStore.redo(PageStore.java:1535)
at org.h2.store.PageLog.recover(PageLog.java:318)
at org.h2.store.PageStore.recover(PageStore.java:1371)
at org.h2.store.PageStore.openExisting(PageStore.java:361)
at org.h2.store.PageStore.open(PageStore.java:285)
at org.h2.engine.Database.getPageStore(Database.java:2298)
at org.h2.engine.Database.open(Database.java:626)
at org.h2.engine.Database.openDatabase(Database.java:244)
at org.h2.engine.Database.<init>(Database.java:239)
at org.h2.engine.Engine.openSession(Engine.java:56)
at org.h2.engine.Engine.openSession(Engine.java:160)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:139)
at org.h2.engine.Engine.createSession(Engine.java:122)
at org.h2.engine.Engine.createSession(Engine.java:28)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:323)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:105)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:90)
at org.h2.Driver.connect(Driver.java:73)
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:278)
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182)
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:701)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:635)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:188)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:127)
at org.wso2.carbon.user.core.util.DatabaseUtil.getDBConnection(DatabaseUtil.java:565)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:973)
... 31 more
Caused by: java.lang.RuntimeException: rowCount expected 7545 got 13 T90.I92
at org.h2.message.DbException.throwInternalError(DbException.java:231)
at org.h2.table.RegularTable.checkRowCount(RegularTable.java:168)
at org.h2.table.RegularTable.addRow(RegularTable.java:120)
... 57 more
[2017-08-21 11:50:30,165] ERROR - Activator Cannot start User Manager Core bundle
org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:273)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:100)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:113)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:68)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:322)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:127)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:263)
... 19 more
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:401)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:222)
... 21 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:355)
... 22 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:839)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:4064)
at org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager.<init>(JDBCUserStoreManager.java:280)
at org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager.<init>(JDBCUserStoreManager.java:222)
... 27 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:991)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:827)
... 30 more
Caused by: org.h2.jdbc.JdbcSQLException: General error: "java.lang.RuntimeException: rowCount expected 7545 got 13 T90.I92" [50000-175]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:161)
at org.h2.message.DbException.convert(DbException.java:284)
at org.h2.table.RegularTable.addRow(RegularTable.java:137)
at org.h2.store.PageStore.redo(PageStore.java:1535)
at org.h2.store.PageLog.recover(PageLog.java:318)
at org.h2.store.PageStore.recover(PageStore.java:1371)
at org.h2.store.PageStore.openExisting(PageStore.java:361)
at org.h2.store.PageStore.open(PageStore.java:285)
at org.h2.engine.Database.getPageStore(Database.java:2298)
at org.h2.engine.Database.open(Database.java:626)
at org.h2.engine.Database.openDatabase(Database.java:244)
at org.h2.engine.Database.<init>(Database.java:239)
at org.h2.engine.Engine.openSession(Engine.java:56)
at org.h2.engine.Engine.openSession(Engine.java:160)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:139)
at org.h2.engine.Engine.createSession(Engine.java:122)
at org.h2.engine.Engine.createSession(Engine.java:28)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:323)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:105)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:90)
at org.h2.Driver.connect(Driver.java:73)
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:278)
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182)
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:701)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:635)
at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:188)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:127)
at org.wso2.carbon.user.core.util.DatabaseUtil.getDBConnection(DatabaseUtil.java:565)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:973)
... 31 more
Caused by: java.lang.RuntimeException: rowCount expected 7545 got 13 T90.I92
at org.h2.message.DbException.throwInternalError(DbException.java:231)
at org.h2.table.RegularTable.checkRowCount(RegularTable.java:168)
at org.h2.table.RegularTable.addRow(RegularTable.java:120)
... 57 more
[2017-08-21 11:50:53,572] INFO - TaglibUriRule TLD skipped. URI: http://tiles.apache.org/tags-tiles is already defined
You are using API Manager 2.1 with a SQLite database, but we haven't tested this database against this product. Also, from the logs, it seems you are using the H2 database driver class.
Can you use the SQLite database driver class org.sqlite.JDBC and see?
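As a quick sanity check, a minimal sketch (in Scala; the database file path is an assumption, and the driver class and URL scheme are the standard ones from the sqlite-jdbc library) that verifies the SQLite driver actually opens the file:

import java.sql.DriverManager

// Load the SQLite driver instead of the H2 driver seen in the trace,
// then open the database file directly.
Class.forName("org.sqlite.JDBC")
val conn = DriverManager.getConnection("jdbc:sqlite:/path/to/your/database.db")
try {
  println(conn.getMetaData.getDriverName)
} finally {
  conn.close()
}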
Following is the trace:
INFO [Service Thread] 2017-04-03 16:03:12,892 GCInspector.java:284 - ConcurrentMarkSweep GC in 409ms. CMS Old Gen: 735835280 -> 244369824; Code Cache: 19242240 -> 19422784; Metaspace: 33957296 -> 33963456; Par Eden Space: 5530560 -> 20738944; Par Survivor Space: 6324656 -> 10485760
INFO [StreamReceiveTask:4] 2017-04-03 16:03:12,934 SecondaryIndexManager.java:365 - Submitting index build of rank_candidate_english_english,rank_candidate_english_rank for data in BigTableReader(path='/var/lib/cassandra/data/workindia/rank_candidate_english-cb5b37f0178d11e7bd24e7b9b065592d/mc-1-big-Data.db'),BigTableReader(path='/var/lib/cassandra/data/workindia/rank_candidate_english-cb5b37f0178d11e7bd24e7b9b065592d/mc-2-big-Data.db')
ERROR [StreamReceiveTask:2] 2017-04-03 16:04:23,887 StreamSession.java:534 - [Stream #a082a410-1886-11e7-bc70-ef55eb0b43e9] Streaming error occurred
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.NoSuchElementException
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:402) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.index.SecondaryIndexManager.buildIndexesBlocking(SecondaryIndexManager.java:373) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.index.SecondaryIndexManager.buildAllIndexesBlocking(SecondaryIndexManager.java:269) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:201) ~[apache-cassandra-3.0.12.jar:3.0.12]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.0.12.jar:3.0.12]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
Caused by: java.util.concurrent.ExecutionException: java.util.NoSuchElementException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) [na:1.8.0_121]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) [na:1.8.0_121]
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:398) ~[apache-cassandra-3.0.12.jar:3.0.12]
... 9 common frames omitted
Caused by: java.util.NoSuchElementException: null
at org.apache.cassandra.utils.AbstractIterator.next(AbstractIterator.java:64) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.index.SecondaryIndexManager.lambda$indexPartition$17(SecondaryIndexManager.java:598) ~[apache-cassandra-3.0.12.jar:3.0.12]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_121]
at org.apache.cassandra.index.SecondaryIndexManager.indexPartition(SecondaryIndexManager.java:598) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:68) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1337) ~[apache-cassandra-3.0.12.jar:3.0.12]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
... 6 common frames omitted
INFO [StreamReceiveTask:2] 2017-04-03 16:04:24,196 StreamResultFuture.java:183 - [Stream #a082a410-1886-11e7-bc70-ef55eb0b43e9] Session with /172.31.6.131 is complete
WARN [StreamReceiveTask:2] 2017-04-03 16:04:24,197 StreamResultFuture.java:210 - [Stream #a082a410-1886-11e7-bc70-ef55eb0b43e9] Stream failed
ERROR [main] 2017-04-03 16:04:24,207 StorageService.java:1232 - Error while waiting on bootstrap to complete. Bootstrap will have to be restarted.
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-18.0.jar:na]
at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1227) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:892) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:659) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:572) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:346) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) [apache-cassandra-3.0.12.jar:3.0.12]
Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540) ~[apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:235) ~[apache-cassandra-3.0.12.jar:3.0.12]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_121]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.0.12.jar:3.0.12]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
WARN [StreamReceiveTask:2] 2017-04-03 16:04:24,212 StorageService.java:1222 - Error during bootstrap.
org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85) ~[apache-cassandra-3.0.12.jar:3.0.12]
at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) [guava-18.0.jar:na]
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457) [guava-18.0.jar:na]
at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) [guava-18.0.jar:na]
at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) [guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202) [guava-18.0.jar:na]
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540) [apache-cassandra-3.0.12.jar:3.0.12]
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:235) [apache-cassandra-3.0.12.jar:3.0.12]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_121]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.0.12.jar:3.0.12]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_121]
WARN [main] 2017-04-03 16:04:24,250 StorageService.java:944 - Some data streaming failed. Use nodetool to check bootstrap state and resume. For more, see `nodetool help bootstrap`. IN_PROGRESS
INFO [main] 2017-04-03 16:04:24,251 CassandraDaemon.java:656 - Waiting for gossip to settle before accepting client requests...
INFO [main] 2017-04-03 16:04:32,414 CassandraDaemon.java:687 - No gossip backlog; proceeding
INFO [main] 2017-04-03 16:04:33,526 NativeTransportService.java:70 - Netty using native Epoll event loop
INFO [main] 2017-04-03 16:04:34,374 Server.java:159 - Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO [main] 2017-04-03 16:04:34,376 Server.java:160 - Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO [main] 2017-04-03 16:04:35,150 CassandraDaemon.java:488 - Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
The following has been tried:
1. Deleting the data, commitlogs, and saved_caches folders and restarting
2. Running nodetool scrub --skip-corrupted and then bootstrapping again
3. Several other things
Note: I had a dead node which I removed, but I still see it in gossip.
Can that be an issue?
How can I resolve this?
I think you are getting this error because of an index.
Are you using any custom index? If yes, make sure the custom index jar is on the new Cassandra node. If that doesn't solve your problem, try dropping all the indexes, then join the new node; after the join completes you can recreate all the indexes (see the sketch after the link).
You can check out this link on joining a new node:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddNodeToCluster.html
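For illustration, a minimal sketch of dropping and later recreating the secondary indexes named in the log, using the DataStax Java driver from Scala. The contact point is an assumption, and the indexed column names are not visible in the log, so the CREATE statement is a placeholder:

import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
val session = cluster.connect("workindia")
// Drop the indexes that the failing index build refers to...
session.execute("DROP INDEX IF EXISTS rank_candidate_english_english")
session.execute("DROP INDEX IF EXISTS rank_candidate_english_rank")
// ...bootstrap the new node, then recreate them afterwards, e.g.:
// session.execute("CREATE INDEX rank_candidate_english_rank ON rank_candidate_english (rank)")
session.close()
cluster.close()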
I am using Apache CXF for developing web services.
I have two WARs inside my Tomcat server, and from one WAR I am calling the other.
After adding the code below (calling the other WAR from a class using JaxWsProxyFactoryBean),
the deployment is not happening.
Please let me know whether we need to add anything for this inside the configuration files.
The error is
SEVERE: Error listenerStart
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

// Endpoint of the SOAP service exposed by the other WAR
String host = "http://localhost:8080/bayer-ws-1.0/bayer/soap";
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(bayerService.class);
factory.setAddress(host);
// Create a client proxy and invoke the remote operation
bayerService client = (bayerService) factory.create();
client.holdings(request);
Thanks.
This is from the server logs:
Nov 23, 2011 7:05:47 PM org.apache.catalina.core.StandardContext listenerStart
SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'bayerSOAP': Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: Lcom/bayer/middleware/zecco/accountRestrictions/AccountRestrictions;
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1302)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:463)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:404)
at java.security.AccessController.doPrivileged(Native Method)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:375)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:263)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:170)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:260)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:184)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:163)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:430)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:729)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:381)
at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:254)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:198)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4206)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4705)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:601)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:943)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:778)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:504)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1317)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:324)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1065)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1057)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:754)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.NoClassDefFoundError: Lcom/bayer/middleware/zecco/accountRestrictions/AccountRestrictions;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2291)
at java.lang.Class.getDeclaredFields(Class.java:1743)
at org.apache.cxf.jaxb.JAXBContextInitializer.walkReferences(JAXBContextInitializer.java:268)
at org.apache.cxf.jaxb.JAXBContextInitializer.addClass(JAXBContextInitializer.java:227)
at org.apache.cxf.jaxb.JAXBContextInitializer.addType(JAXBContextInitializer.java:177)
at org.apache.cxf.jaxb.JAXBContextInitializer.addType(JAXBContextInitializer.java:170)
at org.apache.cxf.jaxb.JAXBContextInitializer.walkReferences(JAXBContextInitializer.java:271)
at org.apache.cxf.jaxb.JAXBContextInitializer.addClass(JAXBContextInitializer.java:227)
at org.apache.cxf.jaxb.JAXBContextInitializer.begin(JAXBContextInitializer.java:150)
at org.apache.cxf.service.ServiceModelVisitor.visitOperation(ServiceModelVisitor.java:109)
at org.apache.cxf.service.ServiceModelVisitor.walk(ServiceModelVisitor.java:74)
at org.apache.cxf.jaxb.JAXBDataBinding.initialize(JAXBDataBinding.java:249)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.buildServiceFromClass(ReflectionServiceFactoryBean.java:376)
at org.apache.cxf.jaxws.support.JaxWsServiceFactoryBean.buildServiceFromClass(JaxWsServiceFactoryBean.java:525)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.initializeServiceModel(ReflectionServiceFactoryBean.java:439)
at org.apache.cxf.service.factory.ReflectionServiceFactoryBean.create(ReflectionServiceFactoryBean.java:195)
at org.apache.cxf.jaxws.support.JaxWsServiceFactoryBean.create(JaxWsServiceFactoryBean.java:164)
at org.apache.cxf.frontend.AbstractWSDLBasedEndpointFactory.createEndpoint(AbstractWSDLBasedEndpointFactory.java:100)
at org.apache.cxf.frontend.ServerFactoryBean.create(ServerFactoryBean.java:117)
at org.apache.cxf.jaxws.JaxWsServerFactoryBean.create(JaxWsServerFactoryBean.java:168)
at org.apache.cxf.jaxws.EndpointImpl.getServer(EndpointImpl.java:346)
at org.apache.cxf.jaxws.EndpointImpl.doPublish(EndpointImpl.java:259)
at org.apache.cxf.jaxws.EndpointImpl.publish(EndpointImpl.java:209)
at org.apache.cxf.jaxws.EndpointImpl.publish(EndpointImpl.java:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeCustomInitMethod(AbstractAutowireCapableBeanFactory.java:1378)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1339)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1299)
... 39 more
Caused by: java.lang.ClassNotFoundException: com.bayer.middleware.zecco.accountRestrictions.AccountRestrictions
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526)
... 71 more
Nov 23, 2011 7:05:47 PM org.apache.catalina.core.ApplicationContext log
INFO: Closing Spring root WebApplicationContext
Nov 23, 2011 7:05:47 PM org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
Nov 23, 2011 7:05:47 PM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
While trying to run a C++ program (following this link) on my Hadoop cluster, I got the error mentioned below.
I referred to related posts (this) regarding the error and tried tweaking my Makefile, but I am still unable to resolve the issue.
(I am using Fedora 12; I am also not able to locate a .configure file to add to.)
My Makefile looks like this:
CC = g++
HADOOP_INSTALL = /home/hadoop/Desktop/Cloudera/hadoop-0.20.2-cdh3u1
SSL_INSTALL = /usr/include/openssl
PLATFORM = Linux-i386-32
CPPFLAGS = -m32 -I$(HADOOP_INSTALL)/c++/$(PLATFORM)/include -I$(SSL_INSTALL)
wordcount: wordcount.cpp
	$(CC) $(CPPFLAGS) $< -Wall -Wextra -L$(SSL_INSTALL) -lssl -lcrypto -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib -lhadooppipes \
	-lhadooputils -lpthread -g -O2 -o $@
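A sketch of how the binary would typically be built and staged on HDFS before the run below (the paths match the job submission; the mkdir step is an assumption):
make wordcount                                        # build with the flags above
hadoop fs -mkdir /user/hadoop/dbin                    # assumes the directory does not exist yet
hadoop fs -put wordcount /user/hadoop/dbin/wordcount  # stage the binary for -program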
I am unable to figure out how to resolve this error.
Any help will be appreciated.
[hadoop@01HW394491 Desktop]$ hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriter=true -input /user/hadoop/dtest -output /user/hadoop/dipeshtryWC -program /user/hadoop/dbin/wordcount
11/11/25 15:55:07 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
11/11/25 15:55:07 INFO util.NativeCodeLoader: Loaded the native-hadoop library
11/11/25 15:55:07 WARN snappy.LoadSnappy: Snappy native library not loaded
11/11/25 15:55:07 INFO mapred.FileInputFormat: Total input paths to process : 3
11/11/25 15:55:07 INFO mapred.JobClient: Running job: job_201111161101_0145
11/11/25 15:55:08 INFO mapred.JobClient: map 0% reduce 0%
11/11/25 15:55:14 INFO mapred.JobClient: Task Id : attempt_201111161101_0145_m_000000_0, Status : FAILED
java.io.IOException
at org.apache.hadoop.mapred.pipes.OutputHandler.waitForAuthentication(OutputHandler.java:188)
at org.apache.hadoop.mapred.pipes.Application.waitForAuthentication(Application.java:194)
at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:149)
at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:68)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
attempt_201111161101_0145_m_000000_0: Server failed to authenticate. Exiting
[... the same java.io.IOException stack trace and "Server failed to authenticate. Exiting" message repeat for the remaining eight task attempts (tasks m_000000, m_000001, and m_000002, attempts 0 through 2) ...]
11/11/25 15:55:30 INFO mapred.JobClient: Job complete: job_201111161101_0145
11/11/25 15:55:30 INFO mapred.JobClient: Counters: 8
11/11/25 15:55:30 INFO mapred.JobClient: Job Counters
11/11/25 15:55:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=56297
11/11/25 15:55:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/11/25 15:55:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/11/25 15:55:30 INFO mapred.JobClient: Rack-local map tasks=9
11/11/25 15:55:30 INFO mapred.JobClient: Launched map tasks=12
11/11/25 15:55:30 INFO mapred.JobClient: Data-local map tasks=3
11/11/25 15:55:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
11/11/25 15:55:30 INFO mapred.JobClient: Failed map tasks=1
11/11/25 15:55:30 INFO mapred.JobClient: Job Failed: NA
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1246)
at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:248)
at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:479)
at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:494)
[hadoop@01HW394491 Desktop]$