I am struggling to find the Faraday incantation to scan an entire DDB table. The following call produces output, but returns far fewer results than the 18M records I know are in the table.
(far/scan
  common/client-opts
  v2-index/layer-table-name
  {:return #{:layer-key :range-key}})
=>
[{:range-key "soil&2015-07-22T15:13:09.101Z&ssurgo&v1", :layer-key "886985&886985"}
{:range-key "soil&2015-07-29T19:20:09.973Z&ssurgo&v1", :layer-key "886985&886985"}
...
{:range-key "veg&2014-05-29T16:16:31.000Z&true-color&v1", :layer-key "1674603&1674603"}
{:range-key "veg&2014-06-14T16:16:39.000Z&abs&v1", :layer-key "1674603&1674603"}]
What can I do to get Faraday to deal up all the records? The source code suggests that there is some :last-prim-kvs option, but it's not clear to me what would go in there. The primary key on this DDB table is a composite primary key composed of :layer-key and :range-key.
If it'll fit in memory, this works...
The key to the whole scheme is getting the opts map set up with a :limit mapping as well as a :span-reqs {:max 1} mapping. The :span-reqs mapping is totally obscure to me, but it appears to cap how many underlying requests a single scan call will span; together with :limit it is the real driver behind what is conceptually the "page size". I have set up a 10-element table like...
;; This only works on the whole table because the table is small!!!!
(far/scan
  common/client-opts
  "users.robert.kuhar.wtf_far"
  {:return #{:part_key :sort_key :note}})
=>
[{:part_key "456", :sort_key "fha.abs", :note "\"456\",\"fha.abs\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.rank", :note "\"456\",\"fha.rank\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.raw", :note "\"456\",\"fha.raw\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.true-color", :note "\"456\",\"fha.true-color\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "soil.ssurgo", :note "\"456\",\"soil.ssurgo\" created 2016-12-08T21:32:20.789Z."}
{:part_key "123", :sort_key "fha.abs", :note "\"123\",\"fha.abs\" created 2016-12-08T21:24:30.139Z."}
{:part_key "123", :sort_key "fha.rank", :note "\"123\",\"fha.rank\" created 2016-12-08T21:24:30.139Z"}
{:part_key "123", :sort_key "fha.raw", :note "\"123\",\"fha.raw\" created 2016-12-08T21:24:30.139Z."}
{:part_key "123", :sort_key "fha.true-color", :note "\"123\",\"fha.true-color\" created 2016-12-08T21:24:30.139Z."}
{:part_key "123", :sort_key "soil.ssurgo", :note "\"123\",\"soil.ssurgo\" created 2016-12-08T21:24:30.139Z."}]
If I want to move through this table 4 elements at a time, the initial call is...
(far/scan
  common/client-opts
  "users.robert.kuhar.wtf_far"
  {:return #{:part_key :sort_key :note}
   :limit 4
   :span-reqs {:max 1}})
=>
[{:part_key "456", :sort_key "fha.abs", :note "\"456\",\"fha.abs\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.rank", :note "\"456\",\"fha.rank\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.raw", :note "\"456\",\"fha.raw\" created 2016-12-08T21:32:20.789Z."}
{:part_key "456", :sort_key "fha.true-color", :note "\"456\",\"fha.true-color\" created 2016-12-08T21:32:20.789Z."}]
And all subsequent calls need to set a :last-prim-kvs {:part_key "xxx" :sort_key "yyy"} mapping in that opts map to tell Faraday where to pick up: it is the full primary key of the last item returned on the previous page. For the 2nd page the call looks like...
(far/scan
  common/client-opts
  "users.robert.kuhar.wtf_far"
  {:return #{:part_key :sort_key :note}
   :limit 4
   :span-reqs {:max 1}
   :last-prim-kvs {:part_key "456" :sort_key "fha.true-color"}})
=>
[{:part_key "456", :sort_key "soil.ssurgo", :note "\"456\",\"soil.ssurgo\" created 2016-12-08T21:32:20.789Z."}
{:part_key "123", :sort_key "fha.abs", :note "\"123\",\"fha.abs\" created 2016-12-08T21:24:30.139Z."}
{:part_key "123", :sort_key "fha.rank", :note "\"123\",\"fha.rank\" created 2016-12-08T21:24:30.139Z"}
{:part_key "123", :sort_key "fha.raw", :note "\"123\",\"fha.raw\" created 2016-12-08T21:24:30.139Z."}]
The last page of my 10-element table is...
(far/scan
  common/client-opts
  "users.robert.kuhar.wtf_far"
  {:return #{:part_key :sort_key :note}
   :limit 4
   :span-reqs {:max 1}
   :last-prim-kvs {:part_key "123" :sort_key "fha.raw"}})
=>
[{:part_key "123", :sort_key "fha.true-color", :note "\"123\",\"fha.true-color\" created 2016-12-08T21:24:30.139Z."}
{:part_key "123", :sort_key "soil.ssurgo", :note "\"123\",\"soil.ssurgo\" created 2016-12-08T21:24:30.139Z."}]
Just 2 elements even though I asked for 4. Trying to far/scan beyond that is always empty.
(far/scan
  common/client-opts
  "users.robert.kuhar.wtf_far"
  {:return #{:part_key :sort_key :note}
   :limit 4
   :span-reqs {:max 1}
   :last-prim-kvs {:part_key "123" :sort_key "soil.ssurgo"}})
=> []
So this does it end-to-end, provided everything will fit in memory.
(loop [accum []
       page (far/scan
              client-opts
              "users.robert.kuhar.wtf_far"
              {:limit 4
               :span-reqs {:max 1}})]
  (if (empty? page)
    accum
    (let [last-on-page (last page)
          last-part-key (:part_key last-on-page)
          last-sort-key (:sort_key last-on-page)]
      (recur
        (into accum page)
        (far/scan
          client-opts
          "users.robert.kuhar.wtf_far"
          {:limit 4
           :span-reqs {:max 1}
           :last-prim-kvs {:part_key last-part-key
                           :sort_key last-sort-key}})))))
=>
[{:part_key "456", :sort_key "fha.abs", :note "\"456\",\"fha.abs\" created 2016-12-08T21:32:20.789Z."}
...
{:part_key "123", :sort_key "soil.ssurgo", :note "\"123\",\"soil.ssurgo\" created 2016-12-08T21:24:30.139Z."}]
I think the sad final answer to "How can I get faraday/scan to walk an entire DynamoDB table?" is that, by itself, it can't. You need to build the paging by hand.
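If the table won't fit in memory, the same hand-built paging scheme can be turned into a lazy sequence of pages, so each page is fetched only when a consumer asks for it. The following is just a minimal sketch under the assumptions above (same composite-key attribute names, same hypothetical table name and client-opts); lazy-scan-pages is a name I made up, not part of Faraday:

;; Returns a lazy sequence of pages, each fetched on demand via the
;; :limit / :span-reqs / :last-prim-kvs scheme described above.
(defn lazy-scan-pages
  ([client-opts table opts]
   (lazy-scan-pages client-opts table opts nil))
  ([client-opts table opts last-prim-kvs]
   (lazy-seq
     (let [page (far/scan client-opts table
                          (cond-> opts
                            last-prim-kvs (assoc :last-prim-kvs last-prim-kvs)))]
       (when (seq page)
         ;; The last item's full primary key becomes the next page's cursor.
         (cons page
               (lazy-scan-pages client-opts table opts
                                (select-keys (last page) [:part_key :sort_key]))))))))

;; Usage: walk the whole table one record at a time without accumulating it.
(doseq [page (lazy-scan-pages common/client-opts
                              "users.robert.kuhar.wtf_far"
                              {:limit 4 :span-reqs {:max 1}})
        record page]
  (println record))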
Related
I'm trying to run a Spark Structured Streaming job and save its checkpoints to Google Cloud Storage. I have a couple of jobs; one without aggregation works perfectly, but the second, with aggregations, throws an exception. I found that others have had similar issues with checkpointing on S3 because S3 doesn't support read-after-write semantics (https://blog.yuvalitzchakov.com/improving-spark-streaming-checkpoint-performance-with-aws-efs/), but GCS does, so everything should be OK. I'd be glad if anybody would share their experience with checkpointing.
val writeToKafka = stream.writeStream
  .format("kafka")
  .trigger(ProcessingTime(5000))
  .option("kafka.bootstrap.servers", "localhost:29092")
  .option("topic", "test_topic")
  .option("checkpointLocation", "gs://test/check_test/Job1")
  .start()
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
[Executor task launch worker for task 1] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask - Committed partition 0 (task 1, attempt 0stage 1.0)
[Executor task launch worker for task 1] INFO org.apache.spark.sql.execution.streaming.CheckpointFileManager - Writing atomically to gs://test/check_test/Job1/state/0/0/1.delta using temp file gs://test/check_test/Job1/state/0/0/.1.delta.8a93d644-0d8e-4cb9-82b5-6418b9e63ffd.TID1.tmp
[Executor task launch worker for task 1] ERROR org.apache.spark.TaskContextImpl - Error in TaskCompletionListener
java.lang.NullPointerException
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.write(GoogleHadoopOutputStream.java:114)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at net.jpountz.lz4.LZ4BlockOutputStream.finish(LZ4BlockOutputStream.java:261)
at net.jpountz.lz4.LZ4BlockOutputStream.close(LZ4BlockOutputStream.java:193)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:303)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:274)
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$cancelDeltaFile(HDFSBackedStateStoreProvider.scala:508)
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.abort(HDFSBackedStateStoreProvider.scala:150)
at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:65)
at org.apache.spark.sql.execution.streaming.state.package$StateStoreOps$$anonfun$1$$anonfun$apply$1.apply(package.scala:64)
at org.apache.spark.TaskContext$$anon$1.onTaskCompletion(TaskContext.scala:131)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:117)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:130)
at org.apache.spark.TaskContextImpl$$anonfun$invokeListeners$1.apply(TaskContextImpl.scala:128)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:128)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Executor task launch worker for task 1] ERROR org.apache.spark.executor.Executor - Exception in task 0.0 in stage 1.0 (TID 1)
org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[task-result-getter-1] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[task-result-getter-1] ERROR org.apache.spark.scheduler.TaskSetManager - Task 0 in stage 1.0 failed 1 times; aborting job
[task-result-getter-1] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Removed TaskSet 1.0, whose tasks have all completed, from pool
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Cancelling stage 1
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.TaskSchedulerImpl - Killing all running tasks in stage 1: Stage cancelled
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - ResultStage 1 (start at Job1.scala:53) failed in 9.863 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] INFO org.apache.spark.scheduler.DAGScheduler - Job 0 failed: start at Job1.scala:53, took 20.926657 s
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@228cec9e is aborting.
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec - Data source writer org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@228cec9e aborted.
[stream execution thread for [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]] ERROR org.apache.spark.sql.execution.streaming.MicroBatchExecution - Query [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1] terminated with error
org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 35 more
Caused by: org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Writing job aborted.
=== Streaming Query ===
Identifier: [id = f130d772-fc9e-4b0f-a81e-942af0741ae9, runId = 7dc1cb33-c5f2-4ebe-8707-251de2503ee1]
Current Committed Offsets: {}
Current Available Offsets: {KafkaV2[Subscribe[NormalizedEvents]]: {"NormalizedEvents":{"0":46564}}}
Current State: ACTIVE
Thread State: RUNNABLE
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: org.apache.spark.SparkException: Writing job aborted.
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:92)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2783)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2783)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:64)
... 35 more
Caused by: org.apache.spark.util.TaskCompletionListenerException: null
at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
at org.apache.spark.scheduler.Task.run(Task.scala:137)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Thread-1] INFO org.apache.spark.SparkContext - Invoking stop() from shutdown hook
[Thread-1] INFO org.spark_project.jetty.server.AbstractConnector - Stopped Spark@1ce93c18{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
[Thread-1] INFO org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://10.25.12.222:4041
[dispatcher-event-loop-0] INFO org.apache.spark.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
[Thread-1] INFO org.apache.spark.storage.memory.MemoryStore - MemoryStore cleared
[Thread-1] INFO org.apache.spark.storage.BlockManager - BlockManager stopped
[Thread-1] INFO org.apache.spark.storage.BlockManagerMaster - BlockManagerMaster stopped
[dispatcher-event-loop-1] INFO org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint - OutputCommitCoordinator stopped!
[Thread-1] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Shutdown hook called
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /private/var/folders/_t/7m21x7313gs74_yfv4txsr69b8yh87/T/temporaryReader-75fdf46f-7de0-4ca7-9c77-8bd034e4f5a3
[Thread-1] INFO org.apache.spark.util.ShutdownHookManager - Deleting directory /private/var/folders/_t/7m21x7313gs74_yfv4txsr69b8yh87/T/spark-bde783f1-fa66-420f-87e7-5c1895ab7ccc
Checkpointing of Spark Streaming jobs to Google Cloud Storage was fixed. The fix will be included in the GCS connector 2.1.4 and 2.2.0 releases.
You cannot use GCS as a checkpoint store if you do aggregations in your stream, at least with GCS connector version 2.1.3 (Hadoop 2). It's perfectly fine if your transforms don't include any groupBy, but if they do, you should save your checkpoints to HDFS or somewhere else.
I got the same issue trying to write a stream to GCS in Spark 2.4.4. There is no problem using GCS as the writeStream sink, but I got the same NullPointerException when using GCS as the checkpoint location. As I am running Spark on Google Dataproc, I can use the HDFS capabilities of the Dataproc nodes instead.
I had to port code from a private cloud to GCS. After some trial and error, these are the changes I made in order to get the code running:
For GCS I set up a dual-region bucket and set a retention policy on it (I know it's weird, but I found this worked for me), though I set it for only one day. You can set up a lifecycle policy as well if you want.
I used OutputMode.Append instead of Update.
I replaced agg with the flatMapGroupsWithState function.
For example, here is the original code:
events.withWatermark(eventTime = "timestamp", delayThreshold = configs(waterMarkConst))
  .groupBy("timestamp", "name")
  .agg(expr("sum(count) as cnt"))
  .select("timestamp", "name", "cnt")
  .toDF()
  .as[(Timestamp, String, Double)]
  .map(record => M(record._2, record._3, record._1))
which was replaced by the following code:
events.withWatermark(eventTime = "timestamp", delayThreshold = configs(waterMarkConst))
  .groupByKey(m => m._1 + "." + m._2)
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.EventTimeTimeout())(updateSentMetricsAggregatedState)
I am trying to connect to a REPL in a Clojure project in Light Table. I went to Connections and chose the project.clj I wanted to connect to, but unfortunately without success. I created the project with "lein new app my-app". Before this, I had tried to connect to another project that I had created with the Luminus template, and that was successful. But with this simple app made with "lein new app my-app" I can't connect. I got the following error:
We couldn't connect.
Looks like there was an issue trying to connect to the project. Here's what we got:
final project: {:description FIXME: write description, :compile-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev+8ddc75d4\classes, :deploy-repositories [[clojars {:url https://clojars.org/repo/, :password :gpg, :username :gpg}]], :group my-first-neural-network, :license {:name EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0, :url https://www.eclipse.org/legal/epl-2.0/}, :java-cmd C:\Program Files\Java\jdk1.8.0_201\bin\java.exe, :resource-paths (C:\Users\nenad\Desktop\my-first-neural-network\dev-resources C:\Users\nenad\Desktop\my-first-neural-network\resources), :uberjar-merge-with {META-INF/plexus/components.xml leiningen.uberjar/components-merger, data_readers.clj leiningen.uberjar/clj-map-merger, #"META-INF/services/.*" [clojure.core/slurp (fn* [p1__953__955__auto__ p2__954__956__auto__] (clojure.core/str p1__953__955__auto__
p2__954__956__auto__)) clojure.core/spit]}, :name my-first-neural-network, :checkout-deps-shares [:source-paths :test-paths :resource-paths :compile-path #'leiningen.core.classpath/checkout-deps-paths], :source-paths (C:\Users\nenad\Desktop\my-first-neural-network\src), :eval-in :subprocess, :repositories [[central {:url https://repo1.maven.org/maven2/, :snapshots false}] [clojars {:url https://clojars.org/repo/}]], :test-paths (C:\Users\nenad\Desktop\my-first-neural-network\test), :target-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev+8ddc75d4, :prep-tasks [javac compile], :native-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev+8ddc75d4\native, :offline? false, :root C:\Users\nenad\Desktop\my-first-neural-network, :pedantic? ranges, :clean-targets [:target-path], :plugins [], :url http://example.com/FIXME, :profiles {:uberjar {:aot [:all], :jvm-opts nil, :eval-in nil}}, :plugin-repositories [[central {:url https://repo1.maven.org/maven2/, :snapshots false}] [clojars {:url https://clojars.org/repo/}]], :version 0.1.0-SNAPSHOT, :jar-exclusions [#"^\."], :main my-first-neural-network.core, :global-vars {}, :uberjar-exclusions [#"(?i)^META-INF/[^/]*\.(SF|RSA|DSA)$"], :jvm-opts [], :dependencies ([org.clojure/clojure 1.10.0] [org.clojure/tools.nrepl 0.2.10 :exclusions ([org.clojure/clojure])] [clojure-complete/clojure-complete 0.2.3 :exclusions ([org.clojure/clojure])] [lein-light-nrepl/lein-light-nrepl 0.3.3] [lein-light-nrepl-instarepl/lein-light-nrepl-instarepl 0.3.1]), :release-tasks [[vcs assert-committed] [change version leiningen.release/bump-version release] [vcs commit] [vcs tag] [deploy] [change version leiningen.release/bump-version] [vcs commit] [vcs push]], :repl-options {:nrepl-middleware [lighttable.nrepl.handler/lighttable-ops], :init (clojure.core/swap! lighttable.nrepl.core/my-settings clojure.core/merge {:name my-first-neural-network 0.1.0-SNAPSHOT, :project (quote {:description FIXME: write description, :compile-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev\classes, :deploy-repositories [[clojars {:url https://clojars.org/repo/, :password :gpg, :username :gpg}]], :group my-first-neural-network, :license {:name EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0, :url https://www.eclipse.org/legal/epl-2.0/}, :java-cmd C:\Program Files\Java\jdk1.8.0_201\bin\java.exe, :resource-paths (C:\Users\nenad\Desktop\my-first-neural-network\dev-resources C:\Users\nenad\Desktop\my-first-neural-network\resources), :uberjar-merge-with {META-INF/plexus/components.xml leiningen.uberjar/components-merger, data_readers.clj leiningen.uberjar/clj-map-merger, #"META-INF/services/.*" [clojure.core/slurp (fn* [p1__953__955__auto__ p2__954__956__auto__] (clojure.core/str p1__953__955__auto__
p2__954__956__auto__)) clojure.core/spit]}, :name my-first-neural-network, :checkout-deps-shares [:source-paths :test-paths :resource-paths :compile-path #'leiningen.core.classpath/checkout-deps-paths], :source-paths (C:\Users\nenad\Desktop\my-first-neural-network\src), :eval-in :subprocess, :repositories [[central {:url https://repo1.maven.org/maven2/, :snapshots false}] [clojars {:url https://clojars.org/repo/}]], :test-paths (C:\Users\nenad\Desktop\my-first-neural-network\test), :target-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev, :prep-tasks [javac compile], :native-path C:\Users\nenad\Desktop\my-first-neural-network\target\base+system+user+dev\native, :offline? false, :root C:\Users\nenad\Desktop\my-first-neural-network, :pedantic? ranges, :clean-targets [:target-path], :plugins [], :url http://example.com/FIXME, :profiles {:uberjar {:aot [:all], :jvm-opts nil, :eval-in nil}}, :plugin-repositories [[central {:url https://repo1.maven.org/maven2/, :snapshots false}] [clojars {:url https://clojars.org/repo/}]], :version 0.1.0-SNAPSHOT, :jar-exclusions [#"^\."], :main my-first-neural-network.core, :global-vars {}, :uberjar-exclusions [#"(?i)^META-INF/[^/]*\.(SF|RSA|DSA)$"], :jvm-opts [], :dependencies ([org.clojure/clojure 1.10.0] [org.clojure/tools.nrepl 0.2.10 :exclusions ([org.clojure/clojure])] [clojure-complete/clojure-complete 0.2.3 :exclusions ([org.clojure/clojure])]), :release-tasks [[vcs assert-committed] [change version leiningen.release/bump-version release] [vcs commit] [vcs tag] [deploy] [change version leiningen.release/bump-version] [vcs commit] [vcs push]], :test-selectors {:default (constantly true)}})})}, :test-selectors {:default (constantly true)}}
Error loading lighttable.nrepl.handler: Syntax error macroexpanding clojure.core/ns at (cljs/source_map/base64_vlq.clj:1:1).
Exception in thread "main" Syntax error compiling var at (C:\Users\nenad\AppData\Local\Temp\form-init2299474071958135132.clj:1:5184).
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7114)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.analyze(Compiler.java:6745)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3888)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7108)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.analyze(Compiler.java:6745)
at clojure.lang.Compiler$InvokeExpr.parse(Compiler.java:3888)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7108)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.access$300(Compiler.java:38)
at clojure.lang.Compiler$LetExpr$Parser.parse(Compiler.java:6384)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7106)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.analyze(Compiler.java:6745)
at clojure.lang.Compiler$BodyExpr$Parser.parse(Compiler.java:6120)
at clojure.lang.Compiler$FnMethod.parse(Compiler.java:5467)
at clojure.lang.Compiler$FnExpr.parse(Compiler.java:4029)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7104)
at clojure.lang.Compiler.analyze(Compiler.java:6789)
at clojure.lang.Compiler.eval(Compiler.java:7173)
at clojure.lang.Compiler.eval(Compiler.java:7166)
at clojure.lang.Compiler.load(Compiler.java:7635)
at clojure.lang.Compiler.loadFile(Compiler.java:7573)
at clojure.main$load_script.invokeStatic(main.clj:452)
at clojure.main$init_opt.invokeStatic(main.clj:454)
at clojure.main$init_opt.invoke(main.clj:454)
at clojure.main$initialize.invokeStatic(main.clj:485)
at clojure.main$null_opt.invokeStatic(main.clj:519)
at clojure.main$null_opt.invoke(main.clj:516)
at clojure.main$main.invokeStatic(main.clj:598)
at clojure.main$main.doInvoke(main.clj:561)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.lang.Var.applyTo(Var.java:705)
at clojure.main.main(main.java:37)
Caused by: java.lang.RuntimeException: Unable to resolve var: lighttable.nrepl.handler/lighttable-ops in this context
at clojure.lang.Util.runtimeException(Util.java:221)
at clojure.lang.Compiler$TheVarExpr$Parser.parse(Compiler.java:720)
at clojure.lang.Compiler.analyzeSeq(Compiler.java:7106)
... 34 more
clojure.lang.ExceptionInfo: Subprocess failed {:exit-code 1}
at clojure.core$ex_info.invoke(core.clj:4593)
at leiningen.core.eval$fn__2432.invoke(eval.clj:236)
at clojure.lang.MultiFn.invoke(MultiFn.java:233)
at leiningen.core.eval$eval_in_project.invoke(eval.clj:337)
at clojure.lang.AFn.applyToHelper(AFn.java:160)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:632)
at leiningen.repl$repl.doInvoke(repl.clj:322)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at leiningen.light_nrepl$light.invoke(light_nrepl.clj:77)
at leiningen.light_nrepl$_main.doInvoke(light_nrepl.clj:85)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at leiningen.light_nrepl.main(Unknown Source)
I changed the Clojure version in the project.clj file in the root of the project, but now I can't start a REPL. I got the following error:
Starting nREPL server...
"C:\Program Files\Java\jdk-11.0.2\bin\java.exe" -Dfile.encoding=Cp1252 -Dconf=dev-config.edn -Dclojure.compile.path=C:\Users\nenad\Desktop\Vezba\myapp\target\default\classes -Dmyapp.version=0.1.0-SNAPSHOT -Dclojure.debug=false "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2018.3.5\lib\idea_rt.jar=49713:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2018.3.5\bin" -classpath C:\Users\nenad\Desktop\Vezba\myapp\test\clj;C:\Users\nenad\Desktop\Vezba\myapp\env\dev\clj;C:\Users\nenad\Desktop\Vezba\myapp\src\clj;C:\Users\nenad\Desktop\Vezba\myapp\env\dev\resources;C:\Users\nenad\Desktop\Vezba\myapp\dev-resources;C:\Users\nenad\Desktop\Vezba\myapp\resources;C:\Users\nenad\Desktop\Vezba\myapp\target\default\classes;C:\Users\nenad\.m2\repository\metosin\reitit-core\0.3.1\reitit-core-0.3.1.jar;C:\Users\nenad\.m2\repository\ring\ring-core\1.7.1\ring-core-1.7.1.jar;C:\Users\nenad\.m2\repository\funcool\cuerdas\2.0.5\cuerdas-2.0.5.jar;C:\Users\nenad\.m2\repository\clojure-complete\clojure-complete\0.2.5\clojure-complete-0.2.5.jar;C:\Users\nenad\.m2\repository\org\clojure\clojure\1.8.0\clojure-1.8.0.jar;C:\Users\nenad\.m2\repository\cprop\cprop\0.1.13\cprop-0.1.13.jar;C:\Users\nenad\.m2\repository\org\msgpack\msgpack\0.6.12\msgpack-0.6.12.jar;C:\Users\nenad\.m2\repository\org\webjars\webjars-locator\0.36\webjars-locator-0.36.jar;C:\Users\nenad\.m2\repository\expound\expound\0.7.2\expound-0.7.2.jar;C:\Users\nenad\.m2\repository\lambdaisland\deep-diff\0.0-25\deep-diff-0.0-25.jar;C:\Users\nenad\.m2\repository\tigris\tigris\0.1.1\tigris-0.1.1.jar;C:\Users\nenad\.m2\repository\mvxcvi\arrangement\1.1.1\arrangement-1.1.1.jar;C:\Users\nenad\.m2\repository\org\projectodd\wunderboss\wunderboss-web\0.13.1\wunderboss-web-0.13.1.jar;C:\Users\nenad\.m2\repository\metosin\reitit-dev\0.3.1\reitit-dev-0.3.1.jar;C:\Users\nenad\.m2\repository\ch\qos\logback\logback-classic\1.1.3\logback-classic-1.1.3.jar;C:\Users\nenad\.m2\repository\metosin\reitit-swagger\0.3.1\reitit-swagger-0.3.1.jar;C:\Users\nenad\.m2\repository\selmer\selmer\1.12.12\selmer-1.12.12.jar;C:\Users\nenad\.m2\repository\metosin\reitit-schema\0.3.1\reitit-schema-0.3.1.jar;C:\Users\nenad\.m2\repository\com\cognitect\transit-clj\0.8.313\transit-clj-0.8.313.jar;C:\Users\nenad\.m2\repository\org\projectodd\wunderboss\wunderboss-core\0.13.1\wunderboss-core-0.13.1.jar;C:\Users\nenad\.m2\repository\org\clojure\core.rrb-vector\0.0.13\core.rrb-vector-0.0.13.jar;C:\Users\nenad\.m2\repository\meta-merge\meta-merge\1.0.0\meta-merge-1.0.0.jar;C:\Users\nenad\.m2\repository\joda-time\joda-time\2.9.9\joda-time-2.9.9.jar;C:\Users\nenad\.m2\repository\ring\ring-headers\0.3.0\ring-headers-0.3.0.jar;C:\Users\nenad\.m2\repository\pjstadig\humane-test-output\0.9.0\humane-test-output-0.9.0.jar;C:\Users\nenad\.m2\repository\javax\xml\bind\jaxb-api\2.3.0\jaxb-api-2.3.0.jar;C:\Users\nenad\.m2\repository\org\apache\commons\commons-lang3\3.8.1\commons-lang3-3.8.1.jar;C:\Users\nenad\.m2\repository\mount\mount\0.1.16\mount-0.1.16.jar;C:\Users\nenad\.m2\repository\ring\ring-ssl\0.3.0\ring-ssl-0.3.0.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\datatype\jackson-datatype-jsr310\2.9.7\jackson-datatype-jsr310-2.9.7.jar;C:\Users\nenad\.m2\repository\org\projectodd\wunderboss\wunderboss-web-undertow\0.13.1\wunderboss-web-undertow-0.13.1.jar;C:\Users\nenad\.m2\repository\metosin\reitit\0.3.1\reitit-0.3.1.jar;C:\Users\nenad\.m2\repository\metosin\reitit-spec\0.3.1\reitit-spec-0.3.1.jar;C:\Users\nenad\.m2\repository\commons-fileupload\commons-fileupload
\1.3.3\commons-fileupload-1.3.3.jar;C:\Users\nenad\.m2\repository\io\undertow\undertow-core\1.4.14.Final\undertow-core-1.4.14.Final.jar;C:\Users\nenad\.m2\repository\metosin\reitit-middleware\0.3.1\reitit-middleware-0.3.1.jar;C:\Users\nenad\.m2\repository\prone\prone\1.6.1\prone-1.6.1.jar;C:\Users\nenad\.m2\repository\metosin\jsonista\0.2.2\jsonista-0.2.2.jar;C:\Users\nenad\.m2\repository\org\jboss\spec\javax\servlet\jboss-servlet-api_3.1_spec\1.0.0.Final\jboss-servlet-api_3.1_spec-1.0.0.Final.jar;C:\Users\nenad\.m2\repository\net\jodah\expiringmap\0.5.8\expiringmap-0.5.8.jar;C:\Users\nenad\.m2\repository\org\immutant\web\2.1.10\web-2.1.10.jar;C:\Users\nenad\.m2\repository\ring\ring-mock\0.3.2\ring-mock-0.3.2.jar;C:\Users\nenad\.m2\repository\prismatic\schema\1.1.9\schema-1.1.9.jar;C:\Users\nenad\.m2\repository\metosin\spec-tools\0.9.0\spec-tools-0.9.0.jar;C:\Users\nenad\.m2\repository\org\webjars\npm\material-icons\0.3.0\material-icons-0.3.0.jar;C:\Users\nenad\.m2\repository\org\clojure\tools.reader\0.10.0\tools.reader-0.10.0.jar;C:\Users\nenad\.m2\repository\org\ow2\asm\asm\5.1\asm-5.1.jar;C:\Users\nenad\.m2\repository\ring\ring-codec\1.1.1\ring-codec-1.1.1.jar;C:\Users\nenad\.m2\repository\nrepl\nrepl\0.6.0\nrepl-0.6.0.jar;C:\Users\nenad\.m2\repository\org\jboss\spec\javax\websocket\jboss-websocket-api_1.1_spec\1.1.0.Final\jboss-websocket-api_1.1_spec-1.1.0.Final.jar;C:\Users\nenad\.m2\repository\io\undertow\undertow-websockets-jsr\1.4.14.Final\undertow-websockets-jsr-1.4.14.Final.jar;C:\Users\nenad\.m2\repository\ns-tracker\ns-tracker\0.3.1\ns-tracker-0.3.1.jar;C:\Users\nenad\.m2\repository\org\clojure\java.classpath\0.2.3\java.classpath-0.2.3.jar;C:\Users\nenad\.m2\repository\org\webjars\webjars-locator-jboss-vfs\0.1.0\webjars-locator-jboss-vfs-0.1.0.jar;C:\Users\nenad\.m2\repository\commons-io\commons-io\2.6\commons-io-2.6.jar;C:\Users\nenad\.m2\repository\metosin\reitit-ring\0.3.1\reitit-ring-0.3.1.jar;C:\Users\nenad\.m2\repository\crypto-equality\crypto-equality\1.0.0\crypto-equality-1.0.0.jar;C:\Users\nenad\.m2\repository\tech\droit\clj-diff\1.0.0\clj-diff-1.0.0.jar;C:\Users\nenad\.m2\repository\org\jboss\xnio\xnio-api\3.3.6.Final\xnio-api-3.3.6.Final.jar;C:\Users\nenad\.m2\repository\org\projectodd\wunderboss\wunderboss-clojure\0.13.1\wunderboss-clojure-0.13.1.jar;C:\Users\nenad\.m2\repository\luminus-transit\luminus-transit\0.1.1\luminus-transit-0.1.1.jar;C:\Users\nenad\.m2\repository\org\jboss\xnio\xnio-nio\3.3.6.Final\xnio-nio-3.3.6.Final.jar;C:\Users\nenad\.m2\repository\metosin\reitit-sieppari\0.3.1\reitit-sieppari-0.3.1.jar;C:\Users\nenad\.m2\repository\com\cognitect\transit-js\0.8.846\transit-js-0.8.846.jar;C:\Users\nenad\.m2\repository\org\webjars\npm\bulma\0.7.4\bulma-0.7.4.jar;C:\Users\nenad\.m2\repository\metosin\ring-swagger-ui\2.2.10\ring-swagger-ui-2.2.10.jar;C:\Users\nenad\.m2\repository\commons-codec\commons-codec\1.11\commons-codec-1.11.jar;C:\Users\nenad\.m2\repository\com\googlecode\json-simple\json-simple\1.1.1\json-simple-1.1.1.jar;C:\Users\nenad\.m2\repository\hiccup\hiccup\1.0.5\hiccup-1.0.5.jar;C:\Users\nenad\.m2\repository\org\webjars\webjars-locator-core\0.37\webjars-locator-core-0.37.jar;C:\Users\nenad\.m2\repository\clj-tuple\clj-tuple\0.2.2\clj-tuple-0.2.2.jar;C:\Users\nenad\.m2\repository\mvxcvi\puget\1.0.3\puget-1.0.3.jar;C:\Users\nenad\.m2\repository\clj-time\clj-time\0.14.3\clj-time-0.14.3.jar;C:\Users\nenad\.m2\repository\json-html\json-html\0.4.4\json-html-0.4.4.jar;C:\Users\nenad\.m2\repository\org\clojure\tools.logging\0.4.1\tools.logging-0.4.1
.jar;C:\Users\nenad\.m2\repository\metosin\reitit-swagger-ui\0.3.1\reitit-swagger-ui-0.3.1.jar;C:\Users\nenad\.m2\repository\org\jboss\spec\javax\annotation\jboss-annotations-api_1.2_spec\1.0.0.Final\jboss-annotations-api_1.2_spec-1.0.0.Final.jar;C:\Users\nenad\.m2\repository\ring\ring-devel\1.7.1\ring-devel-1.7.1.jar;C:\Users\nenad\.m2\repository\luminus-immutant\luminus-immutant\0.2.5\luminus-immutant-0.2.5.jar;C:\Users\nenad\.m2\repository\org\clojure\tools.cli\0.4.2\tools.cli-0.4.2.jar;C:\Users\nenad\.m2\repository\org\clojure\tools.namespace\0.2.11\tools.namespace-0.2.11.jar;C:\Users\nenad\.m2\repository\metosin\ring-http-response\0.9.1\ring-http-response-0.9.1.jar;C:\Users\nenad\.m2\repository\potemkin\potemkin\0.4.5\potemkin-0.4.5.jar;C:\Users\nenad\.m2\repository\expiring-map\expiring-map\0.1.8\expiring-map-0.1.8.jar;C:\Users\nenad\.m2\repository\com\cognitect\transit-java\0.8.337\transit-java-0.8.337.jar;C:\Users\nenad\.m2\repository\metosin\reitit-interceptors\0.3.1\reitit-interceptors-0.3.1.jar;C:\Users\nenad\.m2\repository\org\slf4j\slf4j-api\1.7.7\slf4j-api-1.7.7.jar;C:\Users\nenad\.m2\repository\virgil\virgil\0.1.6\virgil-0.1.6.jar;C:\Users\nenad\.m2\repository\metosin\schema-tools\0.11.0\schema-tools-0.11.0.jar;C:\Users\nenad\.m2\repository\ring-webjars\ring-webjars\0.2.0\ring-webjars-0.2.0.jar;C:\Users\nenad\.m2\repository\luminus\ring-ttl-session\0.3.2\ring-ttl-session-0.3.2.jar;C:\Users\nenad\.m2\repository\metosin\muuntaja\0.6.4\muuntaja-0.6.4.jar;C:\Users\nenad\.m2\repository\crypto-random\crypto-random\1.2.0\crypto-random-1.2.0.jar;C:\Users\nenad\.m2\repository\javax\servlet\javax.servlet-api\3.1.0\javax.servlet-api-3.1.0.jar;C:\Users\nenad\.m2\repository\riddley\riddley\0.1.12\riddley-0.1.12.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\core\jackson-databind\2.9.8\jackson-databind-2.9.8.jar;C:\Users\nenad\.m2\repository\metosin\sieppari\0.0.0-alpha7\sieppari-0.0.0-alpha7.jar;C:\Users\nenad\.m2\repository\com\cognitect\transit-cljs\0.8.256\transit-cljs-0.8.256.jar;C:\Users\nenad\.m2\repository\fipp\fipp\0.6.17\fipp-0.6.17.jar;C:\Users\nenad\.m2\repository\markdown-clj\markdown-clj\1.0.7\markdown-clj-1.0.7.jar;C:\Users\nenad\.m2\repository\org\immutant\core\2.1.10\core-2.1.10.jar;C:\Users\nenad\.m2\repository\cheshire\cheshire\5.8.1\cheshire-5.8.1.jar;C:\Users\nenad\.m2\repository\io\undertow\undertow-servlet\1.4.14.Final\undertow-servlet-1.4.14.Final.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.9.6\jackson-core-2.9.6.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\dataformat\jackson-dataformat-cbor\2.9.6\jackson-dataformat-cbor-2.9.6.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.9.0\jackson-annotations-2.9.0.jar;C:\Users\nenad\.m2\repository\metosin\reitit-frontend\0.3.1\reitit-frontend-0.3.1.jar;C:\Users\nenad\.m2\repository\ch\qos\logback\logback-core\1.1.3\logback-core-1.1.3.jar;C:\Users\nenad\.m2\repository\com\andrewmcveigh\cljs-time\0.5.2\cljs-time-0.5.2.jar;C:\Users\nenad\.m2\repository\org\jboss\logging\jboss-logging\3.2.1.Final\jboss-logging-3.2.1.Final.jar;C:\Users\nenad\.m2\repository\ring\ring-anti-forgery\1.3.0\ring-anti-forgery-1.3.0.jar;C:\Users\nenad\.m2\repository\realize\realize\1.1.0\realize-1.1.0.jar;C:\Users\nenad\.m2\repository\org\apache\commons\commons-compress\1.18\commons-compress-1.18.jar;C:\Users\nenad\.m2\repository\org\javassist\javassist\3.18.1-GA\javassist-3.18.1-GA.jar;C:\Users\nenad\.m2\repository\funcool\struct\1.3.0\struct-1.3.0.jar;C:\Users\nenad\.m
2\repository\metosin\reitit-http\0.3.1\reitit-http-0.3.1.jar;C:\Users\nenad\.m2\repository\clojure\java-time\clojure.java-time\0.3.2\clojure.java-time-0.3.2.jar;C:\Users\nenad\.m2\repository\ring\ring-defaults\0.3.2\ring-defaults-0.3.2.jar;C:\Users\nenad\.m2\repository\org\clojure\spec.alpha\0.2.176\spec.alpha-0.2.176.jar;C:\Users\nenad\.m2\repository\com\fasterxml\jackson\dataformat\jackson-dataformat-smile\2.9.6\jackson-dataformat-smile-2.9.6.jar;C:\Users\nenad\.m2\repository\clj-stacktrace\clj-stacktrace\0.2.8\clj-stacktrace-0.2.8.jar clojure.main -i C:\Users\nenad\AppData\Local\Temp\form-init47301827214867056.clj
java.lang.ExceptionInInitializerError
at clojure.main.<clinit>(main.java:20)
Caused by: java.lang.ExceptionInInitializerError, compiling:(user.clj:1:1)
at clojure.lang.Compiler.load(Compiler.java:7391)
at clojure.lang.RT.loadResourceScript(RT.java:372)
at clojure.lang.RT.loadResourceScript(RT.java:359)
at clojure.lang.RT.maybeLoadResourceScript(RT.java:355)
at clojure.lang.RT.doInit(RT.java:475)
at clojure.lang.RT.<clinit>(RT.java:331)
... 1 more
Caused by: java.lang.ExceptionInInitializerError
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at clojure.lang.RT.classForName(RT.java:2168)
at clojure.lang.RT.classForName(RT.java:2177)
at clojure.lang.RT.loadClassForName(RT.java:2196)
at clojure.lang.RT.load(RT.java:443)
at clojure.lang.RT.load(RT.java:419)
at clojure.core$load$fn__5677.invoke(core.clj:5893)
at clojure.core$load.invokeStatic(core.clj:5892)
at clojure.core$load.doInvoke(core.clj:5876)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invokeStatic(core.clj:5697)
at clojure.core$load_one.invoke(core.clj:5692)
at clojure.core$load_lib$fn__5626.invoke(core.clj:5737)
at clojure.core$load_lib.invokeStatic(core.clj:5736)
at clojure.core$load_lib.doInvoke(core.clj:5717)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invokeStatic(core.clj:648)
at clojure.core$load_libs.invokeStatic(core.clj:5774)
at clojure.core$load_libs.doInvoke(core.clj:5758)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invokeStatic(core.clj:648)
at clojure.core$require.invokeStatic(core.clj:5796)
at clojure.core$require.doInvoke(core.clj:5796)
at clojure.lang.RestFn.invoke(RestFn.java:482)
at user$eval3$loading__5569__auto____4.invoke(user.clj:1)
at user$eval3.invokeStatic(user.clj:1)
at user$eval3.invoke(user.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6927)
at clojure.lang.Compiler.eval(Compiler.java:6916)
at clojure.lang.Compiler.load(Compiler.java:7379)
... 6 more
Caused by: java.lang.IllegalStateException: Attempting to call unbound fn: #'clojure.core/ident?
at clojure.lang.Var$Unbound.throwArity(Var.java:43)
at clojure.lang.AFn.invoke(AFn.java:32)
at clojure.spec.alpha$spec_impl.invokeStatic(alpha.clj:915)
at clojure.spec.alpha$spec_impl.invoke(alpha.clj:908)
at clojure.spec.alpha__init.load(Unknown Source)
at clojure.spec.alpha__init.<clinit>(Unknown Source)
... 37 more
Exception in thread "main"
Process finished with exit code 1
Exception starting REPL: java.lang.InterruptedException
Hmmm. The problem here is that LT doesn't support a Clojure project running Clojure >1.8.0. Yes, LT needs updating to support newer Clojure/ClojureScript versions; there are WIP patches to help make this possible. For now, sadly, you'll have to drop down to an older release of Clojure.
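Concretely, that means pinning the project's Clojure dependency back to 1.8.0 in project.clj. A minimal sketch of what the file generated by "lein new app my-app" might look like after the change; everything other than the pinned Clojure version is just the template default and may differ in your project:

;; project.clj with Clojure pinned back to 1.8.0 so Light Table's
;; lein-light-nrepl middleware can load; the remaining entries are
;; the "lein new app" template defaults.
(defproject my-app "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :dependencies [[org.clojure/clojure "1.8.0"]] ; was 1.10.0
  :main ^:skip-aot my-app.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all}})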
I've updated the addon ember-attacher in my Ember project from v0.11.4 to v0.13.0 and now I'm getting the error below:
Uncaught Error: Could not find module `@ember/component` imported from `@ember-decorators/argument/-debug/validated-component`
at missingModule (loader.js:247)
at findModule (loader.js:258)
at Module.findDeps (loader.js:168)
at findModule (loader.js:262)
at Module.findDeps (loader.js:168)
at findModule (loader.js:262)
at Module.findDeps (loader.js:168)
at findModule (loader.js:262)
at Module.findDeps (loader.js:168)
at findModule (loader.js:262)
I think there is some problem with @ember/component not being available/loaded yet when ember-decorators wants to use it, but I don't know how to track down what's causing the issue.
There are no other problems/exceptions besides the module error (only some caught ones from babel-polyfill).
Deleting node_modules for a fresh dependency installation didn't change anything. It also happens in both production and development builds.
I also tried to reproduce the issue in a fresh ember-cli project, but didn't manage to do so (I created an empty project, added all the same dependencies and config settings, and created a test page showing an ember-attacher tooltip and popover).
So it must be a special case, but I don't know what causes the problem.
How can I get more insight into why the error occurs? Are there any debugging hints available that I didn't recognize?
I can't provide a repository of our project, but may be able to provide individual code parts, if specific information is needed.
ember -v --verbose:
ember-cli: 2.17.0
http_parser: 2.7.0
node: 8.9.1
v8: 6.1.534.47
uv: 1.15.0
zlib: 1.2.11
ares: 1.10.1-DEV
modules: 57
nghttp2: 1.25.0
openssl: 1.0.2m
icu: 59.1
unicode: 9.0
cldr: 31.0.1
tz: 2017b
os: win32 x64
Dependencies:
"broccoli-asset-rev": "^2.6.0",
"ember-ajax": "^3.0.0",
"ember-attacher": "^0.13.0",
"ember-chrome-devtools": "^0.2.0",
"ember-cli": "~2.17.0",
"ember-cli-app-version": "^3.1.3",
"ember-cli-autoprefixer": "^0.8.1",
"ember-cli-babel": "^6.10.0",
"ember-cli-dependency-checker": "^2.1.0",
"ember-cli-eslint": "^4.2.2",
"ember-cli-htmlbars": "^2.0.3",
"ember-cli-htmlbars-inline-precompile": "^1.0.2",
"ember-cli-inject-live-reload": "^1.7.0",
"ember-cli-moment-shim": "^3.5.0",
"ember-cli-qunit": "^4.1.1",
"ember-cli-sass": "^7.1.2",
"ember-cli-shims": "^1.2.0",
"ember-cli-sri": "^2.1.1",
"ember-cli-string-helpers": "1.5.0",
"ember-cli-uglify": "^2.0.0",
"ember-cli-windows-addon": "^1.3.1",
"ember-cli-yuidoc": "^0.8.8",
"ember-composable-helpers": "^2.0.3",
"ember-cp-validations": "^3.5.1",
"ember-crumbly": "^2.0.0-alpha.1",
"ember-data": "~2.17.0",
"ember-debounced-properties": "0.0.5",
"ember-drag-drop": "^0.4.6",
"ember-export-application-global": "^2.0.0",
"ember-intl": "^2.31.1",
"ember-intl-cp-validations": "^3.0.1",
"ember-load-initializers": "^1.0.0",
"ember-md5": "^1.0.1",
"ember-moment": "^7.5.0",
"ember-paper": "~1.0.0-beta.3",
"ember-pikaday": "^2.2.3",
"ember-pouch": "^4.3.0",
"ember-promise-helpers": "^1.0.3",
"ember-resolver": "^4.5.0",
"ember-responsive": "^2.0.5",
"ember-route-action-helper": "^2.0.6",
"ember-shepherd": "^2.8.0",
"ember-simple-auth": "^1.4.0",
"ember-source": "~2.17.0",
"ember-tether": "^1.0.0-beta.0",
"ember-timepicker": "^0.2.0",
"ember-transition-helper": "^0.0.6",
"ember-truth-helpers": "^2.0.0",
"ember-user-activity": "^0.9.0",
"ember-uuid": "^1.0.0",
"ember-wormhole": "^0.5.3",
"eslint": "^4.12.1",
"eslint-plugin-ember": "^5.0.1",
"loader.js": "^4.6.0",
"paper-time-picker": "^0.1.15",
"pouchdb-authentication": "^0.5.5",
"string_score": "^0.1.22",
"yuidoc-ember-theme": "^1.4.0"
Any help or tips are appreciated. Thanks in advance for taking the time to read my post.
I followed the link http://aparnaank.blogspot.in/2014/03/how-to-configure-wso2-bps-workermanager.html for clustering WSO2 BPS. All the servers started, and the ELB shows that two members have joined.
When I click "Try It" for a web service, I get the following error:
[2014-12-30 15:30:38,237] ERROR {org.apache.catalina.core.StandardWrapperValve} - Servlet.service() for servlet [bridgeservlet] in context with path [/] threw
org.apache.axis2.AxisFault
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.wso2.carbon.core.transports.CarbonServlet.doGet(CarbonServlet.java:155)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:49)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.NullPointerException
at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
at org.wso2.carbon.core.multitenancy.utils.TenantAxisUtils.getTenantConfigurationContext(TenantAxisUtils.java:120)
at org.wso2.carbon.core.multitenancy.utils.TenantAxisUtils.getTenantAxisConfiguration(TenantAxisUtils.java:104)
at org.wso2.carbon.wsdl2form.WSDL2FormGenerator.getAxisService(WSDL2FormGenerator.java:702)
at org.wso2.carbon.wsdl2form.WSDL2FormGenerator.getInternalTryit(WSDL2FormGenerator.java:112)
at org.wso2.carbon.tryit.TryitRequestProcessor.process(TryitRequestProcessor.java:49)
at org.wso2.carbon.core.transports.CarbonServlet.processWithGetProcessor(CarbonServlet.java:182)
at org.wso2.carbon.core.transports.CarbonServlet.doGet(CarbonServlet.java:145)
... 31 more
What do I do now?
Thanks in advance.
When you click on try it, what's the URL it invokes?
Set the proxy ports on the worker nodes and see if that fixes it.
Go to /repository/conf/tomcat/catalina-server.xml and set the proxy ports as follows, so that URLs are generated with the ELB's ports (8280/8243) rather than the worker's own ports:
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280"/>
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="8243"/>
I finally resolved this error. It was because of the SVN repo; I fixed it by configuring the SVN repo correctly.
When I log in to my WSO2 API store, I get the following exception.
Can someone please let me know where in the WSO2 installation directory to look to resolve this issue.
Error log:
2014-11-06 09:40:16,992 [-] [http-nio-9443-exec-24] ERROR RhinoEngine org.mozilla.javascript.WrappedException: Wrapped com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Expected EOF at line 1 column 11 (http#17)
2014-11-06 09:40:16,994 [-] [http-nio-9443-exec-24] ERROR WebAppManager org.mozilla.javascript.WrappedException: Wrapped com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Expected EOF at line 1 column 11 (http#17)
org.jaggeryjs.scriptengine.exceptions.ScriptException: org.mozilla.javascript.WrappedException: Wrapped com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Expected EOF at line 1 column 11 (http#17)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:575)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:432)
at org.jaggeryjs.jaggery.core.JaggeryServlet.doGet(JaggeryServlet.java:24)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:749)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:487)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:339)
at org.jaggeryjs.jaggery.core.JaggeryFilter.doFilter(JaggeryFilter.java:21)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Thanks
Kranthi
We put some logging into the API gateway and found that it was trying to access a URL that it was not able to reach. We fixed the URL and the issue was resolved.
Error log:
Caused by: org.mozilla.javascript.WrappedException: Wrapped com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Expected EOF at line 1 column 11 (http#17)
at org.mozilla.javascript.Context.throwAsScriptRuntimeEx(Context.java:1754)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:148)
at org.mozilla.javascript.FunctionObject.call(FunctionObject.java:386)
at org.mozilla.javascript.optimizer.OptRuntime.callName(OptRuntime.java:63)
at org.mozilla.javascript.gen.http_1._c_anonymous_2(http:17)
at org.mozilla.javascript.gen.http_1.call(http)
at org.mozilla.javascript.optimizer.OptRuntime.callName(OptRuntime.java:63)
at org.mozilla.javascript.gen.http_1._c_anonymous_5(http:195)
at org.mozilla.javascript.gen.http_1.call(http)
at org.mozilla.javascript.optimizer.OptRuntime.callName(OptRuntime.java:63)
at org.mozilla.javascript.gen.http_1._c_anonymous_7(http:202)
at org.mozilla.javascript.gen.http_1.call(http)
at org.mozilla.javascript.optimizer.OptRuntime.callName(OptRuntime.java:63)
at org.jaggeryjs.rhino.store.site.themes.fancy.templates.page.base.c0._c_anonymous_1(/store/site/themes/fancy/templates/page/base/template.jag:90)
template.jag code:
var acResponse = get(url, data, "json");
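For reference, a rough sketch of the kind of logging that exposed the bad URL for us (the logger name here is made up; Log and stringify are standard Jaggery globals, and url/data come from the surrounding template code):
var log = new Log("store.page.base.template"); // hypothetical logger name
// Log the exact URL and payload before the call, so an unreachable or
// misconfigured endpoint shows up in the logs instead of only surfacing
// as a wrapped MalformedJsonException higher up the stack.
log.info("Calling backend: " + url + ", data: " + stringify(data));
var acResponse;
try {
    acResponse = get(url, data, "json");
} catch (e) {
    log.error("Call to " + url + " failed: " + e);
    throw e;
}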
Hope this helps others who run into a similar error.
Best Wishes
Kranthi