Error while processing a persisted job checkDuplicateCSetKey - wso2

TID: [-1] [] [2019-11-22 13:18:34,362] WARN {org.apache.ode.scheduler.simple.SimpleScheduler} - Error while processing a persisted job: Job hqejbhcnphreqf4l2mpcoj time: 2019-11-22 13:18:31 WEST transacted: true persisted: true details: JobDetails( instanceId: null mexId: hqejbhcnphreqf4l2mpcoi processId: {http://wso2.org/bps/sample}my-process-7 type: INVOKE_INTERNAL channel: null correlatorId: null correlationKeySet: null retryCount: 4 inMem: false detailsExt: {enqueue=false}) {org.apache.ode.scheduler.simple.SimpleScheduler}
java.lang.NullPointerException
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.checkDuplicateCSetKey(BpelRuntimeContextImpl.java:621)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.checkDuplicateCSets(BpelRuntimeContextImpl.java:578)
at org.apache.ode.bpel.runtime.PICK$WAITING$2.onRequestRcvd(PICK.java:300)
at sun.reflect.GeneratedMethodAccessor1427.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:1002)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:208)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:283)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:224)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:279)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:434)
at org.apache.ode.bpel.engine.BpelEngineImpl.sendMyRoleFault(BpelEngineImpl.java:835)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:581)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:467)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:633)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:298)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:253)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(T

The problem is caused by stale data:
A BPEL process had 3 versions (v1, v2, v3).
Version v1 had been removed (from the registry and from bpel/-123), but its instances remained in the database.
An old instance of version v1 remained in ACTIVE status with a correlation id of, say, 400200.
When a new instance of version v3 was started with the same correlation id (400200), the exception was raised.
For each new instance, Apache ODE checks whether an ACTIVE instance carrying the same correlation key already exists (checkDuplicateCSets/checkDuplicateCSetKey). In our context, Apache ODE found the stale v1 instance and threw a NullPointerException, because process v1 no longer exists in its registry.
Solution: clean up the old ACTIVE instances of version v1, e.g. by locating them in the ODE database as sketched below.
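A minimal JDBC sketch for locating the stale instances. The table and column names (ODE_PROCESS_INSTANCE, INSTANCE_STATE, PROCESS_ID), the numeric ACTIVE state code, and the connection URL are all assumptions that must be verified against your actual ODE/BPS schema, and matching instances should only be terminated or deleted after a backup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StaleInstanceFinder {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- point this at the ODE/BPS database;
        // the JDBC driver for your database must be on the classpath.
        try (Connection c = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/odedb", "user", "pass");
             PreparedStatement ps = c.prepareStatement(
                 "SELECT ID FROM ODE_PROCESS_INSTANCE "
                 + "WHERE PROCESS_ID LIKE ? AND INSTANCE_STATE = ?")) {
            ps.setString(1, "%my-process-1%"); // the removed v1 process id
            ps.setInt(2, 20);                  // assumed numeric code for ACTIVE
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("Stale ACTIVE instance: " + rs.getLong("ID"));
                }
            }
        }
    }
}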


WSO2 Streaming Integrator Tooling (connect to MQTT broker) - "no NetworkModule installed for scheme "tcp""

I am trying to run a Siddhi application in Streaming Integrator Tooling.
Here's my app:
@App:name('1')
@App:description('Description of the plan')

@source(type = 'mqtt', url = "tcp://192.168.100.82:1883", client.id = "1", topic = "mqtt_topic_input",
    @map(type = 'xml'))
define stream SweetProductionStream (name string, amount double);

@sink(type = 'log')
define stream LowProductionAlertStream (name string, amount double);

-- pass data from SweetProductionStream through into LowProductionAlertStream
@info(name = 'query1')
from SweetProductionStream
select *
insert into LowProductionAlertStream;
I also put the following files in the <SI_TOOLING_HOME>/lib directory:
org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar
I also have a working Mosquitto broker at 192.168.100.82.
When I try to run the application, I get this error in the console:
> 1.siddhi - Started Successfully!
[2022-11-09_23-21-40_302] ERROR {io.siddhi.core.stream.input.source.Source} - Error on '1'. no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883" Error while connecting at Source 'mqtt' at 'SweetProductionStream'. (Encoded)
java.lang.IllegalArgumentException: no NetworkModule installed for scheme "tcp" of URI "tcp://192.168.100.82:1883"
at org.eclipse.paho.client.mqttv3.internal.NetworkModuleService.validateURI(NetworkModuleService.java:70)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:454)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:320)
at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:315)
at org.eclipse.paho.client.mqttv3.MqttClient.<init>(MqttClient.java:227)
at io.siddhi.extension.io.mqtt.source.MqttSource.connect(MqttSource.java:196)
at io.siddhi.core.stream.input.source.Source.connectWithRetry(Source.java:161)
at io.siddhi.core.SiddhiAppRuntimeImpl.startSources(SiddhiAppRuntimeImpl.java:535)
at io.siddhi.core.SiddhiAppRuntimeImpl.start(SiddhiAppRuntimeImpl.java:460)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:93)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:43)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:795)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.startWithVariables(EditorMicroservice.java:815)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$58(MSF4JHttpConnectorListener.java:129)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener$$Lambda$312/404129248.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What is the reason for this error?
I tried changing the versions of the org.eclipse.paho.client.mqttv3-1.1.1.jar and siddhi-io-mqtt-3.0.2.jar files, but that did not help.

Error instantiating snitch class 'org.apache.cassandra.locator.Ec2Snitch'

I'm having a hard time setting up a 2-node Cassandra cluster on EC2 instances. This is version 2.2.19, and I cannot upgrade due to some other dependencies involved.
The EC2 instances are in a private subnet, with static private IPs assigned.
Here is my cassandra.yaml
cluster_name: 'Test-cluster'
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "${private_ip}"
listen_address: ${private_ip}
start_native_transport: true
native_transport_port: 9042
storage_port: 7000
num_tokens: 32
ssl_storage_port: 9042
start_rpc: true
rpc_address: ${private_ip}
rpc_port: 9160
broadcast_rpc_address: ${private_ip}
endpoint_snitch: Ec2Snitch
partitioner: org.apache.cassandra.dht.RandomPartitioner
Here is my system.log
INFO [main] 2021-06-07 18:42:41,900 DatabaseDescriptor.java:327 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2021-06-07 18:42:42,022 DatabaseDescriptor.java:437 - Global memtable on-heap threshold is enabled at 251MB
INFO [main] 2021-06-07 18:42:42,023 DatabaseDescriptor.java:441 - Global memtable off-heap threshold is enabled at 251MB
ERROR [main] 2021-06-07 18:42:42,049 CassandraDaemon.java:787 - Exception encountered during startup
org.apache.cassandra.exceptions.ConfigurationException: Error instantiating snitch class 'org.apache.cassandra.locator.Ec2Snitch'.
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:551) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:529) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:741) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:465) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:133) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:599) [apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:774) [apache-cassandra-2.2.19.jar:2.2.19]
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Ec2Snitch was unable to execute the API call. Not an ec2 node?
at org.apache.cassandra.locator.Ec2Snitch.awsApiCall(Ec2Snitch.java:79) ~[apache-cassandra-2.2.19.jar:2.2.19]
at org.apache.cassandra.locator.Ec2Snitch.<init>(Ec2Snitch.java:55) ~[apache-cassandra-2.2.19.jar:2.2.19]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_282]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_282]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_282]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_282]
at java.lang.Class.newInstance(Class.java:442) ~[na:1.8.0_282]
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:536) ~[apache-cassandra-2.2.19.jar:2.2.19]
Note: when I change the snitch to SimpleSnitch, it actually works.
Please help!!
Answering my own question:
Ec2Snitch uses IMDSv1 to fetch metadata (http://169.254.169.254/latest/meta-data/placement/availability-zone) to determine certain properties.
I created the EC2 instances through Terraform, where my code had:
metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
}
This setting forces IMDSv2 only, which is what causes the issue: Ec2Snitch cannot fetch the metadata with a simple unauthenticated GET (the equivalent of a plain curl).
Solution:
metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "optional"
}
If you are doing this through the console: when launching the instance, make sure the metadata version is set to both V1 and V2.
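To see the difference directly, here is a minimal Java sketch (run on the instance itself; requires Java 9+ for readAllBytes). The first call mirrors the plain GET that Ec2Snitch 2.2.x performs; the token exchange is what IMDSv2 requires:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ImdsCheck {
    static final String AZ_URL =
        "http://169.254.169.254/latest/meta-data/placement/availability-zone";

    public static void main(String[] args) throws IOException {
        // IMDSv1-style call, as issued by Ec2Snitch: returns HTTP 401
        // when the instance is configured with http_tokens = "required".
        HttpURLConnection v1 = (HttpURLConnection) new URL(AZ_URL).openConnection();
        System.out.println("plain GET -> HTTP " + v1.getResponseCode());

        // IMDSv2: obtain a session token first...
        HttpURLConnection tok = (HttpURLConnection)
            new URL("http://169.254.169.254/latest/api/token").openConnection();
        tok.setRequestMethod("PUT");
        tok.setRequestProperty("X-aws-ec2-metadata-token-ttl-seconds", "21600");
        String token = new String(tok.getInputStream().readAllBytes());

        // ...then repeat the GET with the token header.
        HttpURLConnection v2 = (HttpURLConnection) new URL(AZ_URL).openConnection();
        v2.setRequestProperty("X-aws-ec2-metadata-token", token);
        System.out.println("token GET -> " + new String(v2.getInputStream().readAllBytes()));
    }
}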

ORA error in wso2 apim-analytics server

1) I'm getting the below error in the wso2carbon logs when I try to configure the WSO2 APIM-Analytics (2.1) server with an Oracle DB (version 12c). I have tried both ojdbc6.jar and ojdbc7.jar in the lib folder, but the error is still there.
error:
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
2) Is there any REST API available for wso2 apim-analytics, similar to the DAS server, to extract data?
full error:
ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing the scheduled task for the script: APIM_LAST_ACCESS_TIME_SCRIPT {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query create temporary table APILastAccessSummaryData using CarbonJDBC options (dataSource "WSO2AM_STATS_DB", tableName "API_LAST_ACCESS_TIME_SUMMARY", schema "tenantDomain STRING , apiPublisher STRING , api STRING , version STRING , userId STRING , context STRING , max_request_time LONG ", primaryKeys "tenantDomain,apiPublisher,api" )
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: ORA-28040: No matching authentication protocol
thanks,
Santosh
This was an issue identified in Oracle, and the workaround is to set SQLNET.ALLOWED_LOGON_VERSION=8 in the $crs_home/network/admin/sqlnet.ora file. [1]
[1] https://community.softwaregrp.com/t5/UCMDB-and-UD-Practitioners-Forum/ORA-28040-No-matching-authentication-protocol/m-p/253403
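For reference, the workaround from [1] amounts to adding this single line to that file on the database server (a sketch; the exact path depends on your Oracle home layout):

# $crs_home/network/admin/sqlnet.ora
SQLNET.ALLOWED_LOGON_VERSION=8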

connection was closed in ignite map reduce tasks

Environment:
ignite server:
centos6.5 with kernel 2.6.32-431.el6.x86_64
ignite version 1.9
hadoop version 2.6.2
3 server nodes, each started with '-Xms16g -Xmx16g -server -XX:+AggressiveOpts -XX:MaxMetaspaceSize=256m'
I ran a map-reduce test job with Ignite map-reduce. The job simply computes the average number for each person. The data looks like:
Jack 0.35
Tom 0.78
Lily 0.92
Jack 0.28
Tom 0.18
...
At first, I generated a data set of 100M lines, about 2.53GB. The job finished correctly in about 30s. Then I generated a data set of 1 billion lines, about 25.3GB. The job always failed with exceptions; I tried several times with the same result.
The Ignite server node threw the exception below:
[15:06:56,804][ERROR][sys-#2740%null%][GridTcpRestProtocol] Failed to process client request [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=549 lim=549 cap=8192], super=AbstractNioClientWorker [selector=sun.nio.ch.EPollSelectorImpl#1cba0431, idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, gridName=null, finished=false, isCancelled=false, hashCode=906881587, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#50%null%]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/172.31.68.204:11211, rmtAddr=/172.31.68.202:39473, createTime=1493967985751, closeTime=1493968009502, bytesSent=2715, bytesRcvd=2641, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1493968016794, lastSndTime=1493967998303, lastRcvTime=1493968009502, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [jdkMarshaller=JdkMarshaller [], routerClient=false], directMode=false]], accepted=true]], msg=GridClientTaskRequest [taskName=o.a.i.i.processors.hadoop.proto.HadoopProtocolJobStatusTask, arg=HadoopProtocolTaskArguments []]]
class org.apache.ignite.IgniteCheckedException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=549 lim=549 cap=8192], super=AbstractNioClientWorker [selector=sun.nio.ch.EPollSelectorImpl#1cba0431, idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, gridName=null, finished=false, isCancelled=false, hashCode=906881587, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#50%null%]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/172.31.68.204:11211, rmtAddr=/172.31.68.202:39473, createTime=1493967985751, closeTime=1493968009502, bytesSent=2715, bytesRcvd=2641, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1493968016794, lastSndTime=1493967998303, lastRcvTime=1493968009502, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [jdkMarshaller=JdkMarshaller [], routerClient=false], directMode=false]], accepted=true]]
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7239)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:170)
at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:264)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:228)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:261)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:229)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:259)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:389)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:355)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:332)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:158)
at org.apache.ignite.internal.processors.rest.GridRestProcessor$2$1.apply(GridRestProcessor.java:155)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:259)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:389)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:355)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:332)
at org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70)
at org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:259)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:389)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:355)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:332)
at org.apache.ignite.internal.processors.rest.handlers.task.GridTaskCommandHandler$2.apply(GridTaskCommandHandler.java:294)
at org.apache.ignite.internal.processors.rest.handlers.task.GridTaskCommandHandler$2.apply(GridTaskCommandHandler.java:257)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:271)
at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:259)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:389)
at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:355)
at org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1579)
at org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1547)
at org.apache.ignite.internal.processors.task.GridTaskWorker.reduce(GridTaskWorker.java:1157)
at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:942)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:996)
at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1221)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1222)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:850)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:108)
at org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:790)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to send message (connection was closed): GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=549 lim=549 cap=8192], super=AbstractNioClientWorker [selector=sun.nio.ch.EPollSelectorImpl#1cba0431, idx=3, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-rest-3, gridName=null, finished=false, isCancelled=false, hashCode=906881587, interrupted=false, runner=grid-nio-worker-tcp-rest-3-#50%null%]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/172.31.68.204:11211, rmtAddr=/172.31.68.202:39473, createTime=1493967985751, closeTime=1493968009502, bytesSent=2715, bytesRcvd=2641, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1493968016794, lastSndTime=1493967998303, lastRcvTime=1493968009502, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=GridTcpRestParser [jdkMarshaller=JdkMarshaller [], routerClient=false], directMode=false]], accepted=true]]
at org.apache.ignite.internal.util.nio.GridNioServer.send0(GridNioServer.java:554)
at org.apache.ignite.internal.util.nio.GridNioServer.send(GridNioServer.java:494)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionWrite(GridNioServer.java:3036)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:118)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionWrite(GridNioCodecFilter.java:94)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionWrite(GridNioFilterAdapter.java:118)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionWrite(GridNioFilterChain.java:264)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionWrite(GridNioFilterChain.java:189)
at org.apache.ignite.internal.util.nio.GridNioSessionImpl.send(GridNioSessionImpl.java:108)
at org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestNioListener$1.apply(GridTcpRestNioListener.java:258)
... 40 more
The job client threw the exception below:
java.io.IOException: Failed to get job status: job_1fbf9083-9a44-4be9-9199-695a97652dc2_0002
at org.apache.ignite.internal.processors.hadoop.impl.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:197)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:326)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:611)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1357)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1318)
at com.tscloud.sdk.test.ignite.MRTest.run(MRTest.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.tscloud.sdk.test.ignite.MRTest.main(MRTest.java:53)
Caused by: class org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException: Failed to perform request (connection failed): /172.31.68.204:11211
at org.apache.ignite.internal.client.impl.connection.GridClientConnection.getCloseReasonAsException(GridClientConnection.java:491)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:339)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:299)
at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onDisconnected(GridClientConnectionManagerAdapter.java:630)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionClosed(GridNioFilterChain.java:253)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionClosed(GridNioCodecFilter.java:70)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionClosed(GridNioServer.java:3005)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionClosed(GridNioFilterChain.java:147)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:2306)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:929)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2026)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1863)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1568)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
The job configuration is below:
Configuration configuration = new Configuration();
// Run MapReduce through Ignite's Hadoop client protocol instead of YARN/local.
configuration.set(MRConfig.FRAMEWORK_NAME, IgniteHadoopClientProtocolProvider.FRAMEWORK_NAME);
// REST/TCP endpoint of one of the Ignite server nodes (port 11211).
configuration.set(MRConfig.MASTER_ADDRESS, "172.31.68.202:11211");
// Route file-system calls through IGFS.
configuration.set("fs.igfs.impl", "org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem");
configuration.set("fs.default.name", "igfs://igfs@172.31.68.202/");
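For context, a minimal sketch of how such a configuration is then used to submit the job, as in MRTest above; AvgMapper, AvgReducer, and the IGFS paths are hypothetical placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitSketch {
    public static void run(Configuration configuration) throws Exception {
        Job job = Job.getInstance(configuration, "avg-per-person");
        job.setJarByClass(SubmitSketch.class);
        job.setMapperClass(AvgMapper.class);    // hypothetical mapper emitting <name, number>
        job.setReducerClass(AvgReducer.class);  // hypothetical reducer averaging per name
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path("igfs://igfs@172.31.68.202/input"));
        FileOutputFormat.setOutputPath(job, new Path("igfs://igfs@172.31.68.202/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}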
I checked the node status after the job failed using ignitevisorcmd.sh. All server nodes were OK, but sometimes one of the server nodes was down; I do not know why it behaved like this.
Any help is appreciated.
Edit (2017-05-16):
I changed the hadoop core-site.xml to add the hadoop.tmp.dir property as below:
<property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/hadoop-2.6.2/tmp</value>
</property>
Then I reformatted HDFS and re-uploaded the 25.3GB data file, and this time the test ran successfully. It turns out something was wrong with my HDFS; reformatting the namenode solved the problem. The sequence I used is sketched below.
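A sketch of that sequence, assuming Hadoop 2.6.2 defaults (note that formatting the namenode destroys all existing HDFS data, and the input path is illustrative):

$HADOOP_HOME/sbin/stop-dfs.sh            # stop HDFS daemons
hdfs namenode -format                    # destructive: wipes HDFS metadata
$HADOOP_HOME/sbin/start-dfs.sh           # start HDFS again
hdfs dfs -mkdir -p /input
hdfs dfs -put /data/people.txt /input/   # re-upload the 25.3GB data set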
Before the above steps, I had tried checking the JVM heap usage with VisualVM.
[Screenshot: VisualVM monitor of one of the server nodes]
If your node was down temporarily, then it is expected that a job would fail over to another node, or fail altogether if there are no other nodes. I would check that your network was not reset or down. Also, check for the presence of any software firewalls between your nodes (it is best to disable operating-system firewalls).

How to troubleshoot OpenJPA error - Attempt to commit a null javax.transaction.Transaction

How do I troubleshoot the following OpenJPA error?
"Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs."
I see a similar error reported here, but it has no comments.
Full stack trace:
[2012-03-12 02:32:29,476] ERROR {org.apache.ode.bpel.engine.BpelEngineImpl} - Scheduled job failed; jobDetail=JobDetails( instanceId: null mexId: hqejbhcnphr7333k8shi7i processId: {http://ode/bpel/sampleprocess2}sampleProcess-1 type: INVOKE_INTERNAL channel: null correlatorId: null correlationKeySet: null retryCount: null inMem: false detailsExt: {enqueue=false}) {org.apache.ode.bpel.engine.BpelEngineImpl}
<openjpa-2.0.0-wso2v1-r52033:64539M nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs.
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnlyInternal(BrokerImpl.java:1595)
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnly(BrokerImpl.java:1581)
at org.apache.openjpa.kernel.DelegatingBroker.setRollbackOnly(DelegatingBroker.java:951)
at org.apache.openjpa.persistence.EntityManagerImpl.setRollbackOnly(EntityManagerImpl.java:605)
at org.apache.openjpa.persistence.PersistenceExceptions$2.translate(PersistenceExceptions.java:77)
at org.apache.openjpa.kernel.DelegatingQuery.translate(DelegatingQuery.java:99)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:536)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:288)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:300)
at org.apache.ode.dao.jpa.ProcessDAOImpl.getCorrelator(ProcessDAOImpl.java:95)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.select(BpelRuntimeContextImpl.java:306)
at org.apache.ode.bpel.runtime.PICK.run(PICK.java:105)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:879)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:205)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:309)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:250)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:305)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:458)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:553)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:445)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:537)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:284)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:239)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:515)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2012-03-12 02:32:29,477] ERROR {org.apache.ode.scheduler.simple.SimpleScheduler} - Error while processing a persisted job: Job hqejbhcnphr7333k8shi7j time: 2012-03-12 02:32:29 PDT transacted: true persisted: true details: JobDetails( instanceId: null mexId: hqejbhcnphr7333k8shi7i processId: {http://ode/bpel/sampleprocess2}sampleProcess-1 type: INVOKE_INTERNAL channel: null correlatorId: null correlationKeySet: null retryCount: null inMem: false detailsExt: {enqueue=false}) {org.apache.ode.scheduler.simple.SimpleScheduler}
java.lang.IllegalStateException: No transaction associated with current thread
at org.apache.geronimo.transaction.manager.TransactionManagerImpl.rollback(TransactionManagerImpl.java:247)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:297)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:239)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:515)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2012-03-12 02:32:29,683] ERROR {org.apache.ode.scheduler.simple.SimpleScheduler} - Error while processing job, retrying in 5s {org.apache.ode.scheduler.simple.SimpleScheduler}
OpenJPA properties
"openjpa.TransactionMode", "managed"
"openjpa.Log", "commons"
"openjpa.ManagedRuntime", new JpaTxMgrProvider(_tm)
"openjpa.ConnectionFactory", _ds
"openjpa.ConnectionFactoryMode", "managed"
"openjpa.jdbc.TransactionIsolation", "read-committed"
"openjpa.FlushBeforeQueries", "true"
I'm running a standalone server which uses Embedded Tomcat.
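For context, a minimal sketch of how properties like those listed above can be passed when bootstrapping OpenJPA programmatically; the persistence-unit name "ode-unit" and the txMgrProvider/dataSource parameters are placeholders mirroring the _tm/_ds values in the list:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaBootstrap {
    public static EntityManagerFactory create(Object txMgrProvider, Object dataSource) {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("openjpa.TransactionMode", "managed");
        props.put("openjpa.Log", "commons");
        props.put("openjpa.ManagedRuntime", txMgrProvider);   // e.g. new JpaTxMgrProvider(_tm)
        props.put("openjpa.ConnectionFactory", dataSource);   // the managed DataSource _ds
        props.put("openjpa.ConnectionFactoryMode", "managed");
        props.put("openjpa.jdbc.TransactionIsolation", "read-committed");
        props.put("openjpa.FlushBeforeQueries", "true");
        return Persistence.createEntityManagerFactory("ode-unit", props);
    }
}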
I'm not expecting a solution, but some pointers to troubleshoot the issue.
Thanks,
Waruna
I have the same case:
Caused by: <openjpa-2.2.2-r422266:1468616 nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs.
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnlyInternal(BrokerImpl.java:1664)
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnly(BrokerImpl.java:1650)
... 26 more
If your persistence.xml has properties like this one:
<properties>
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(foreignKeys=true)"/>
</properties>
then OpenJPA generates all tables from your JPA classes, including the table OPENJPA_SEQUENCE_TABLE (VERY IMPORTANT!!!).
Without the property "openjpa.jdbc.SynchronizeMappings", the table OPENJPA_SEQUENCE_TABLE will not be generated, and you will get exceptions in the OpenJPA log like this one:
fatal store error> org.apache.openjpa.util.StoreException: Table "OPENJPA_SEQUENCE_TABLE" not found; SQL statement:
SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR UPDATE [42102-174] {SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR UPDATE} [code=42102, state=42S02]
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4962)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4922)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:136)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:110)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:62)
at org.apache.openjpa.jdbc.kernel.AbstractJDBCSeq.next(AbstractJDBCSeq.java:66)
Edit: try adding this table manually:
CREATE TABLE OPENJPA_SEQUENCE_TABLE
(
    ID TINYINT PRIMARY KEY NOT NULL,
    SEQUENCE_VALUE BIGINT
);