While fetching Twitter data into HDFS using Flume, I keep getting the error below, even though I have changed the versions of the twitter4j jar files. Please tell me why this error occurs. Can anyone suggest the next step for fetching the data into HDFS?
(conf-file-poller-0) [DEBUG - org.apache.flume.source.DefaultSourceFactory.getClass(DefaultSourceFactory.java:60)] Source type org.apache.flume.source.twitter.TwitterSource is a custom type
2017-11-01 15:29:12,648 (conf-file-poller-0) [ERROR - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:150)] Unhandled error
java.lang.NoSuchMethodError: twitter4j.TwitterStream.addListener(Ltwitter4j/StatusListener;)V
at org.apache.flume.source.twitter.TwitterSource.configure(TwitterSource.java:114)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:326)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:101)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:141)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Resolved by adding Twitter4J Stream 3.0.3 and Twitter4J Core 3.0.3.
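If you manage the Flume agent's classpath with Maven, these correspond to the following dependencies (a sketch; simply dropping the two jars into Flume's lib/ directory, after removing any older twitter4j-*.jar files, works just as well):

<dependency>
  <groupId>org.twitter4j</groupId>
  <artifactId>twitter4j-core</artifactId>
  <version>3.0.3</version>
</dependency>
<dependency>
  <groupId>org.twitter4j</groupId>
  <artifactId>twitter4j-stream</artifactId>
  <version>3.0.3</version>
</dependency>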
I can see this error multiple times in the logs for my PySpark job in Dataproc, but the job doesn't exit and keeps running for multiple hours. The data the job runs on is also very small. Sometimes the job runs fine after a rerun; it hits this issue randomly. Any help solving this is much appreciated.
Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
executor.scala:318)
at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
WARN org.apache.spark.executor.Executor: Unable to stop heartbeater
java.lang.NullPointerException
at org.apache.spark.executor.Executor.stop(Executor.scala:324)
at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
ERROR org.apache.spark.util.Utils: Uncaught exception in thread shutdown-hook-0
java.lang.NullPointerException
at org.apache.spark.executor.Executor.$anonfun$stop$3(Executor.scala:333)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:222)
at org.apache.spark.executor.Executor.stop(Executor.scala:333)
at org.apache.spark.executor.Executor.$anonfun$stopHookReference$1(Executor.scala:76)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Requesting driver to remove executor 925 for reason Container from a bad node: container_1657869605389_0001_01_000925 on host: xxxx Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1657869605389_0001_01_000925
Exit code: 1
This seems to be SPARK-36383, which affects 3.0.3, 3.1.2, and 3.2.0 and was fixed in 3.2.0 and 3.3.0.
We can backport the patch to Spark 3.1 in Dataproc 2.0. Keep an eye on the release notes.
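Until that backport is released, one possible workaround (an assumption on my part, not an official recommendation) is to run the job on a Dataproc image whose Spark already contains the fix, e.g. image 2.1, which ships Spark 3.3:

gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --image-version=2.1-debian11

Here my-cluster and us-central1 are placeholders; check the Dataproc release notes for the exact image and Spark versions available to you.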
I've downloaded the latest version of KIE Workbench, 7.7.0, which is compatible with WildFly 11. I put kie-workbench-7.7.0.war into the deployments folder of WildFly 11 and start the server from the bin folder with the command shown below; then I get the following issue. Can anyone say what I'm doing wrong here? If I use the older KIE Workbench 6.5.0 with WildFly 10, it works fine. I'm doing this on Windows 10. Thanks in advance.
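The startup command, run from the WildFly bin folder (<WILDFLY_HOME> is a placeholder for wherever WildFly 11 was unpacked):

cd <WILDFLY_HOME>\bin
standalone.bat --server-config=standalone-full.xml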
19:16:43,847 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 82) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./kie-wb: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./kie-wb: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:84)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:241)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:99)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
... 6 more
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.uberfire.backend.server.plugins.engine.PluginManager.init(PluginManager.java:79)
at org.uberfire.backend.server.plugins.PluginService.init(PluginService.java:79)
at org.uberfire.backend.server.plugins.PluginStartup.contextInitialized(PluginStartup.java:32)
at io.undertow.servlet.core.ApplicationListeners.contextInitialized(ApplicationListeners.java:187)
at io.undertow.servlet.core.DeploymentManagerImpl$1.call(DeploymentManagerImpl.java:205)
at io.undertow.servlet.core.DeploymentManagerImpl$1.call(DeploymentManagerImpl.java:174)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:239)
... 8 more
Caused by: java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: D:\downloads\drools%20workbench7\wildfly-11.0.0.Final\standalone\tmp\vfs\temp\temp7606482f82f9e84e\content-3e547a5371ce9421
at org.apache.commons.io.FileUtils.validateListFilesParameters(FileUtils.java:536)
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:512)
at org.apache.commons.io.FileUtils.listFiles(FileUtils.java:684)
2018-03-08 16:36:16,775 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public rsrc:{ hdfs://mycluster/user/abc_user/udf/pig_udf-1.5.7_handle_input_error.jar, 1516336589685, FILE, null }
2018-03-08 16:36:16,775 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download resource { { hdfs://mycluster/user/oozie/share/lib/lib_20171215093741/pig/libgplcompression.so.0.0.0, 1513307849411, FILE, null },pending,[(container_1519371600813_0002_02_000001)],8140205165392614,DOWNLOADING}
java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:406)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:728)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:671)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:155)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2815)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2852)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2834)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:249)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: mycluster
The YARN NodeManager and DataNode services are on the same machines, and the YARN ResourceManager and NameNode are on the same machine. I hit the error above when running a simple Pig script that loads data and prints it. Before adding the standby NameNode, everything worked well. How can I configure YARN to understand my NameNode cluster? Thank you.
After checking hdfs-site.xml again on the two DataNodes where the YARN NodeManagers run, I saw that the file was missing this property compared with the hdfs-site.xml on the NameNode:
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
It is working now.
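For completeness, the failover proxy provider only resolves the logical name when the client side also knows the nameservice and its NameNodes. A minimal client-side HA block in hdfs-site.xml looks like the following sketch (the nameservice mycluster comes from the error above; the NameNode IDs nn1/nn2 and the hostnames are placeholders for your own values):

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>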
I have defined a KafkaReceiver using WSO2 CEP in the management console. It doesn't work, and I can see this error in the server console log:
[2016-02-26 10:17:25,412] INFO {org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime} - Connecting receiver kafkaReceiver
[2016-02-26 10:17:25,415] ERROR {org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime} - Error initializing Input Adapter 'kafkaReceiver, hence this will be suspended indefinitely, Cannot access kafka context due to missing jars
org.wso2.carbon.event.input.adapter.core.exception.InputEventAdapterRuntimeException: Cannot access kafka context due to missing jars
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createConsumerConfig(KafkaEventAdapter.java:114)
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createKafkaAdaptorListener(KafkaEventAdapter.java:132)
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.connect(KafkaEventAdapter.java:66)
at org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime.start(InputAdapterRuntime.java:72)
at org.wso2.carbon.event.input.adapter.core.internal.InputAdapterRuntime.startPolling(InputAdapterRuntime.java:62)
at org.wso2.carbon.event.input.adapter.core.internal.CarbonInputEventAdapterService.startPolling(CarbonInputEventAdapterService.java:187)
at org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverService.startPolling(CarbonEventReceiverService.java:620)
at org.wso2.carbon.event.receiver.core.internal.CarbonEventReceiverManagementService.startPolling(CarbonEventReceiverManagementService.java:99)
at org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService$2.run(CarbonEventManagementService.java:184)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: kafka/consumer/ConsumerConfig
at org.wso2.carbon.event.input.adapter.kafka.KafkaEventAdapter.createConsumerConfig(KafkaEventAdapter.java:112)
... 15 more
Caused by: java.lang.ClassNotFoundException: kafka.consumer.ConsumerConfig cannot be found by org.wso2.carbon.event.input.adapter.kafka_5.0.3
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 16 more
If we read the class source at this link https://github.com/wso2/carbon-analytics-common/blob/master/components/event-receiver/event-input-adapters/org.wso2.carbon.event.input.adapter.kafka/src/main/java/org/wso2/carbon/event/input/adapter/kafka/KafkaEventAdapter.java we can see that the error occurs during ConsumerConfig creation.
In order to use Kafka, you need to download the Kafka distribution and copy the following client JAR files to the <CEP_HOME>/repository/components/lib/ directory [1]:
kafka_2.10-0.8.2.1.jar
zkclient-0.3.jar
scala-library-2.10.4.jar
zookeeper-3.4.6.jar
metrics-core-2.2.0.jar
kafka-clients-0.8.2.1.jar
[1] https://docs.wso2.com/display/CEP400/Supporting+Different+Transports#SupportingDifferentTransports-KafkaTransport
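As a concrete sketch (assuming the Kafka 0.8.2.1 distribution was unpacked to ~/kafka_2.10-0.8.2.1 and CEP_HOME points at the CEP installation; both paths are assumptions, adjust them for your setup):

cd ~/kafka_2.10-0.8.2.1/libs
cp kafka_2.10-0.8.2.1.jar kafka-clients-0.8.2.1.jar zkclient-0.3.jar \
   scala-library-2.10.4.jar zookeeper-3.4.6.jar metrics-core-2.2.0.jar \
   $CEP_HOME/repository/components/lib/
# restart the CEP server so the OSGi runtime picks up the new jars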
I have a BPEL process deployed on WSO2 BPS 3.2.0 that invokes a web service from w3schools.com and returns the result to the BPEL invoker. When executing the BPEL service through SoapUI, I received a timeout exception; however, on closer inspection of the logs, I came across this error:
TID: [0] [BPS] [2014-08-19 19:04:02,827] ERROR {org.apache.ode.jacob.vpu.JacobVPU} - Method "run" in class "org.apache.ode.bpel.runtime.INVOKE" threw an unexpected exception. {org.apache.ode.jacob.vpu.JacobVPU}
java.lang.NoClassDefFoundError: org/apache/synapse/core/axis2/SOAPUtils
at org.wso2.carbon.unifiedendpoint.core.UnifiedEndpointHandler.handleMessageOutput(UnifiedEndpointHandler.java:105)
at org.wso2.carbon.unifiedendpoint.core.UnifiedEndpointHandler.invoke(UnifiedEndpointHandler.java:59)
at org.apache.axis2.engine.Phase.invokeHandler(Phase.java:340)
at org.apache.axis2.engine.Phase.invoke(Phase.java:313)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:426)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:430)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.bpel.core.ode.integration.utils.AxisServiceUtils.invokeService(AxisServiceUtils.java:305)
at org.wso2.carbon.bpel.core.ode.integration.PartnerService.invoke(PartnerService.java:324)
at org.wso2.carbon.bpel.core.ode.integration.BPELMessageExchangeContextImpl.invokePartner(BPELMessageExchangeContextImpl.java:43)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:793)
at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:140)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:898)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:208)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:283)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:224)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:279)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:434)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:558)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:467)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:536)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:530)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:280)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:235)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:530)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:514)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassNotFoundException: org.apache.synapse.core.axis2.SOAPUtils
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 37 more
Upon closer inspection, it seems that the %CARBON_HOME%/repository/components/plugins folder doesn't contain
synapse-core_2.1.2.wso2v4.jar
I tried placing the jar file directly in that folder, and have also tried the patch mechanism:
Created a folder patch0006 in %CARBON_HOME%/repository/components/patches
Placed the jar file in this new folder
Restarted the server
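In cmd terms, the patch steps described above amount to (patch0006 being the next unused patch number on this installation):

mkdir %CARBON_HOME%\repository\components\patches\patch0006
copy synapse-core_2.1.2.wso2v4.jar %CARBON_HOME%\repository\components\patches\patch0006\
rem restart the server; on startup the patch tool copies the jar
rem into %CARBON_HOME%\repository\components\plugins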
The patch mechanism successfully places the file in the %CARBON_HOME%/repository/components/plugins folder, but to no avail: the server is still unable to find the SOAPUtils class. Any ideas how to proceed further?