WSO2AM with WSO2DAS - null apiPublisher for API_DESTINATION_SUMMARY

Connecting wso2am-2.0.0 and wso2am-analytics-2.0.0 on a PostgreSQL (9.5) database (sharing a common WSO2AM_STATS_DB database), we receive the following exception:
TID: [-1] [] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} - Error in executing task: Error while saving data to the table API_DESTINATION_SUMMARY : Job aborted due to stage failure: Task 0 in stage 54296.0 failed 1 times, most recent failure: Lost task 0.0 in stage 54296.0 (TID 50425, localhost): java.sql.BatchUpdateException: Batch entry 0 INSERT INTO API_DESTINATION_SUMMARY (api, version, apiPublisher, context, destination, total_request_count, hostName, year, month, day, time) VALUES ('test01', 'v1.0.0', NULL, '/test/v1.0.0', 'http://demo6009762.mockable.io', 1, 'wso2apimgr3', 2017, 1, 26, '2017-01-26 15:59') ON CONFLICT (api,version,apiPublisher,context,destination,hostName,year,month,day) DO UPDATE SET total_request_count=EXCLUDED.total_request_count, time=EXCLUDED.time was aborted: ERROR: null value in column "apipublisher" violates not-null constraint
The full exception is here.
According to the logs, the direct cause is that the apipublisher field is null, which should not happen.
So now I have a few questions:
How do I prevent this? How do I configure the apiPublisher value? And how do I get rid of the invalid data?
Thank you for any hints.

There is a reported issue for this. You can apply the fix mentioned in the JIRA ticket.
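Until the fix is in place, one possible stopgap (my assumption, not something from the JIRA ticket: it trades the constraint for accepting NULL publishers) is to relax the NOT NULL constraint so the summarization task stops aborting. A minimal sketch with psycopg2; the connection parameters are placeholders for your WSO2AM_STATS_DB instance:

# Hypothetical stopgap, not the official fix: allow NULL apipublisher so the
# summarization task no longer aborts. Connection values are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="WSO2AM_STATS_DB",
                        user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    cur.execute("ALTER TABLE API_DESTINATION_SUMMARY "
                "ALTER COLUMN apipublisher DROP NOT NULL")
conn.close()

Note that in PostgreSQL a NULL never matches the ON CONFLICT key, so rows with a NULL publisher would accumulate as duplicates instead of being upserted; treat this only as a way to keep the other statistics flowing until the proper fix is applied.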

Related

How can a table from the Glue Catalog raise a NullPointerException?

I'm currently trying to join two tables together in AWS Glue from an RDS instance, and after successfully crawling the database structure I'm setting up my job with:
table1 = (
    glueContext.create_dynamic_frame
    .from_catalog(database="transact", table_name="transact_table1")
)
table1.printSchema()
print "Count: ", table1.count()

table2 = (
    glueContext.create_dynamic_frame
    .from_catalog(database="transact", table_name="transact_table2")
)
table2.printSchema()
print "Count: ", table2.count()
Strangely, the job fails with:
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 275, in count
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/mnt/yarn/usercache/root/appcache/application_1520463557704_0008/container_1520463557704_0008_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o64.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 4 times, most recent failure: Lost task 0.3 in stage 4.0 (TID 26, ip-10-1-2-4.ec2.internal, executor 1): java.lang.NullPointerException
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:427)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:425)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:286)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:268)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
The curious part is that it only happens in the second count(): I can see in the logs that it correctly prints out the table1 schema and count, and it also prints the table2 schema.
I did this because I was trying to Join.apply those dynamic frames together, and that was failing with a NullPointerException too. What gives? Am I missing a bit of configuration, perhaps?
UPDATE 1:
It appears to be a problem with that second table in particular: picking some other table from the catalog to serve as table2 makes the job succeed.
So I'll morph the question into: how must a table from the catalog differ from the others to make it raise this error?
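To dig into that, here is a minimal diagnostic sketch (assuming boto3 credentials with Glue read access; the table names are the ones from the job above) that diffs the two catalog schemas, since a column whose declared type doesn't match the underlying data, for example NULLs arriving in a primitive-typed column, is one plausible trigger for the NPE in JdbcUtils:

# Diff the declared column types of the two catalog tables; a type mismatch
# or an extra column in table2 is a candidate cause for the scan-time NPE.
import boto3

glue = boto3.client("glue")

def catalog_columns(database, table):
    resp = glue.get_table(DatabaseName=database, Name=table)
    return {c["Name"]: c["Type"]
            for c in resp["Table"]["StorageDescriptor"]["Columns"]}

cols1 = catalog_columns("transact", "transact_table1")
cols2 = catalog_columns("transact", "transact_table2")

print "only in table2: ", set(cols2) - set(cols1)
print "type differences: ", {k: (cols1[k], cols2[k])
                             for k in set(cols1) & set(cols2)
                             if cols1[k] != cols2[k]}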
print "Count: ", table2count()
Is this a typo made while posting the question to SO? If not, please check the code. Shouldn't it be table2.count()?

Sitecore 8: The automation state for contact 46905fd6-25ee-4b91-a968-80cc29943e2f and plan 00000000-0000-0000-0000-000000000000 has not been found

I'm getting the following error message in the Sitecore logs in one of our production environments.
MessageTaskRunner worker thread 3 06:40:35 ERROR EmailCampaign: Message sending error: Sitecore.Modules.EmailCampaign.Exceptions.EmailCampaignException: The automation state for contact 46905fd6-25ee-4b91-a968-80cc29943e2f and plan 00000000-0000-0000-0000-000000000000 has not been found.
at Sitecore.Modules.EmailCampaign.Core.Analytics.AutomationStatesManager.EnrollOrUpdateContact(Guid contactId, Guid planId, String stateName, EcmCustomValues customValues, String[] validStates)
at Sitecore.Modules.EmailCampaign.Core.Dispatch.DispatchManager.EnrollOrUpdateContact(Guid contactId, DispatchQueueItem dispatchQueueItem, Guid planId, String stateName, EcmCustomValues customValues)
at Sitecore.Modules.EmailCampaign.Core.Dispatch.DispatchTask.OnSendToNextRecipient()
ManagedPoolThread #2 06:40:35 INFO EmailCampaign: Dispatch Message (Unsubscribe Notification): Finished
ManagedPoolThread #2 06:40:35 INFO Job ended: Sending message (C354709A5AEC45AD939190E246A3CEA4) (units processed: )
ManagedPoolThread #16 06:40:35 ERROR Could not update index entry. Action: 'Saved', Item: '{7356F179-B8D2-4091-AC17-D65F02E4416D}'
Exception: System.IO.IOException
Message: read past EOF
Source: Lucene.Net
at Lucene.Net.Store.BufferedIndexInput.Refill()
at Lucene.Net.Store.BufferedIndexInput.ReadByte()
at Lucene.Net.Store.ChecksumIndexInput.ReadByte()
at Lucene.Net.Store.IndexInput.ReadInt()
at Lucene.Net.Index.SegmentInfos.Read(Directory directory, String segmentFileName)
at Lucene.Net.Index.IndexFileDeleter..ctor(Directory directory, IndexDeletionPolicy policy, SegmentInfos segmentInfos, StreamWriter infoStream, DocumentsWriter docWriter, HashSet`1 synced)
at Lucene.Net.Index.DirectoryReader.DoCommit(IDictionary`2 commitUserData)
at Lucene.Net.Index.IndexReader.Commit(IDictionary`2 commitUserData)
at Lucene.Net.Index.IndexReader.Commit()
at Lucene.Net.Index.IndexReader.DecRef()
at Lucene.Net.Index.IndexReader.Dispose(Boolean disposing)
at Sitecore.Search.Crawlers.DatabaseCrawler.DeleteItem(Item item)
at Sitecore.Search.Crawlers.DatabaseCrawler.UpdateItem(Item item)
at System.EventHandler.Invoke(Object sender, EventArgs e)
at Sitecore.Data.Managers.IndexingProvider.UpdateIndexPhaseOne(HistoryEntry entry, Database database)
I have rebuilt the Sitecore analytics database as well, but I'm still getting this error.
I tried submitting forms on our site, and that works fine: I'm getting the subscription mails and the subscription notifications.
Thanks.

Server Errors While Writing With Python Cassandra Driver

code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'}
code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
I am inserting into Cassandra 2.0.13 (a single node, for testing) using the Python cassandra-driver, version 2.6.
The following are my keyspace and table definitions:
CREATE KEYSPACE test_keyspace WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };
CREATE TABLE test_table (
    key text PRIMARY KEY,
    column1 text,
    ...,
    column17 text
) WITH COMPACT STORAGE AND
    bloom_filter_fp_chance=0.010000 AND
    caching='KEYS_ONLY' AND
    comment='' AND
    dclocal_read_repair_chance=0.000000 AND
    gc_grace_seconds=864000 AND
    read_repair_chance=0.100000 AND
    replicate_on_write='true' AND
    populate_io_cache_on_flush='false' AND
    compaction={'class': 'SizeTieredCompactionStrategy'} AND
    compression={'sstable_compression': 'SnappyCompressor'};
What I tried:
1) Multiprocessing (protocol version set to 1)
Each process has its own cluster and session (default_timeout set to 30.0):
def get_cassandra_session():
    """Creates a cluster and gets the session based on the key space."""
    # Be aware that a session cannot be shared between threads/processes,
    # or it will raise an OperationTimedOut exception.
    if CLUSTER_HOST2:
        cluster = cassandra.cluster.Cluster([CLUSTER_HOST1, CLUSTER_HOST2])
    else:
        # If only one address is available, we have to use the older protocol version.
        cluster = cassandra.cluster.Cluster([CLUSTER_HOST1], protocol_version=1)
    session = cluster.connect(KEY_SPACE)
    session.default_timeout = 30.0
    return session
2) Batch insert (protocol version set to 2, because BatchStatement is enabled on Cassandra 2.x):
def batch_insert(session, batch_queue, batch):
    try:
        insert_user = session.prepare(
            "INSERT INTO " + db.TABLE + " (" + db.COLUMN1 + "," + db.COLUMN2 +
            "," + db.COLUMN3 + "," + db.COLUMN4 + ") VALUES (?,?,?,?)")
        while batch_queue.qsize() > 0:
            # batch queue size is 1000
            row_tuple = batch_queue.get()
            batch.add(insert_user, row_tuple)
        session.execute(batch)
    except Exception as e:
        logger.error("batch insert fail.... %s", e)
The above function is invoked by:
batch = BatchStatement(consistency_level=ConsistencyLevel.ONE)
batch_insert(session, batch_queue, batch)
Tuples are stored in batch_queue.
3) Synchronous execution
Several days ago I posted another question, Cassandra update fails; Cassandra was complaining about a timeout issue. I was using synchronous execution for the updates.
Can anyone help? Is this an issue with my code, with the Python cassandra-driver, or with Cassandra itself?
Thanks a million!
If your question is about those errors at the top, those are server-side error responses.
The first says that the coordinator you contacted cannot satisfy the request at CL.ONE, with the nodes it believes are alive. This can happen if all replicas are down (more likely with a low replication factor).
The other two errors are timeouts, where the coordinator didn't get responses from 'live' nodes within the time configured in cassandra.yaml.
All of these indicate that the cluster you're connected to is not healthy. This could be because it is overwhelmed (high GC pauses), or experiencing network issues. Check the server logs for clues.
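You can also check, from the client side, which nodes the driver itself considers alive; a minimal sketch, assuming the cluster object created in get_cassandra_session() above:

# Print the driver's view of each node; if every host shows "down", the
# Unavailable and timeout errors above are exactly what you would expect.
for host in cluster.metadata.all_hosts():
    print host.address, "up" if host.is_up else "down"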
I got the following error, which looks very similar:
cassandra.Unavailable: Error from server: code=1000 [Unavailable exception] message="Cannot achieve consistency level LOCAL_ONE" info={'consistency': 'LOCAL_ONE', 'alive_replicas': 0, 'required_replicas': 1}
When I added a sleep(0.5) in the code, it worked fine. I was trying to write too much too fast...
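A fixed sleep works, but it throttles even when the node is keeping up. A minimal alternative sketch (assuming the session and a prepared statement like the ones in the question) that instead bounds the number of in-flight asynchronous writes:

from threading import Semaphore

MAX_IN_FLIGHT = 64                 # tune to what your node can absorb
slots = Semaphore(MAX_IN_FLIGHT)

def throttled_insert(session, prepared, row_tuple):
    slots.acquire()                # blocks once MAX_IN_FLIGHT writes are pending
    future = session.execute_async(prepared, row_tuple)
    # Release the slot whether the write succeeds or fails.
    future.add_callbacks(lambda _: slots.release(),
                         lambda exc: slots.release())

This applies backpressure only when the server actually falls behind, instead of paying 0.5 s on every write.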

Tasks fail after BAM admin password change

After changing the default password for the admin user in WSO2 BAM 4.1.0, tasks fail with the following error:
TID: [0] [BAM] [2013-06-20 16:56:15,464] ERROR {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} - Error while executing Hive script.
Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl}
java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:355)
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:250)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
TID: [0] [BAM] [2013-06-20 16:56:15,467] ERROR {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} - Error while executing script : am_stats_analyzer_460 {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask}
org.wso2.carbon.analytics.hive.exception.HiveExecutionException: Error while executing Hive script.Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
at org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl.execute(HiveExecutorServiceImpl.java:117)
at org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask.execute(HiveScriptExecutorTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:56)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Reverting the password to its original value solves the issue.
How do I change the password for the admin user and keep the tasks working?
Have you changed the username and password in the Hive script am_stats_analyzer? The defaults are admin/admin; check the Hive script and update the password accordingly. The properties are as follows:
"cassandra.ks.username" = "admin",
"cassandra.ks.password" = "xxxxx",
Check if that fixes your issue.
In order to solve the issue I had to perform the following steps:
1) Edit the file [BAM_HOME]/repository/conf/etc/cassandra-auth.xml and change the password value to the new password.
2) Edit the file [BAM_HOME]/repository/conf/datasources/master-datasources.xml and change the password value of the WSO2BAM_CASSANDRA_DATASOURCE datasource to the new password (a scripted version of this edit is sketched below).
3) Restart the BAM: the Hive tasks now run without errors.
Here the new password is the password I assigned to the admin user.
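For step 2, a minimal sketch of the edit with ElementTree (hypothetical: it assumes the stock layout of master-datasources.xml, where each <datasource> has a <name> and a <password> nested under <definition>/<configuration>; back up the file first):

# Hypothetical helper for the master-datasources.xml edit; the path and the
# element layout are assumptions based on the stock WSO2 BAM distribution.
import xml.etree.ElementTree as ET

path = "[BAM_HOME]/repository/conf/datasources/master-datasources.xml"
tree = ET.parse(path)

for ds in tree.getroot().iter("datasource"):
    if ds.findtext("name") == "WSO2BAM_CASSANDRA_DATASOURCE":
        ds.find("definition/configuration/password").text = "NEW_ADMIN_PASSWORD"

tree.write(path, encoding="utf-8", xml_declaration=True)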
Moreover, the Main \ Manage \ Cassandra Keyspaces \ List page in the BAM UI, which was raising the following error, is now fixed:
org.wso2.carbon.cassandra.mgt.ui.CassandraAdminClientException: Error retrieving keyspace names !
(...)
Caused by: org.apache.axis2.AxisFault: InvalidRequestException(why:You have not logged in)
(...)
Sorry I couldn't follow up with the question earlier; anyway, glad your problem is sorted now! Keep on trying BAM and don't hesitate to holler if you run into any issues.
Thanks,
Shariq.

How to troubleshoot OpenJPA error - Attempt to commit a null javax.transaction.Transaction

How do I troubleshoot the following OpenJPA error?
"Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs."
I saw a similar error reported here, but there were no comments.
Full stack trace:
[2012-03-12 02:32:29,476] ERROR {org.apache.ode.bpel.engine.BpelEngineImpl} - Scheduled job failed; jobDetail=JobDetails( instanceId: null mexId: hqejbhcnphr7333k8shi7i processId: {http://ode/bpel/sampleprocess2}sampleProcess-1 type: INVOKE_INTERNAL channel: null correlatorId: null correlationKeySet: null retryCount: null inMem: false detailsExt: {enqueue=false}) {org.apache.ode.bpel.engine.BpelEngineImpl}
<openjpa-2.0.0-wso2v1-r52033:64539M nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs.
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnlyInternal(BrokerImpl.java:1595)
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnly(BrokerImpl.java:1581)
at org.apache.openjpa.kernel.DelegatingBroker.setRollbackOnly(DelegatingBroker.java:951)
at org.apache.openjpa.persistence.EntityManagerImpl.setRollbackOnly(EntityManagerImpl.java:605)
at org.apache.openjpa.persistence.PersistenceExceptions$2.translate(PersistenceExceptions.java:77)
at org.apache.openjpa.kernel.DelegatingQuery.translate(DelegatingQuery.java:99)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:536)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:288)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:300)
at org.apache.ode.dao.jpa.ProcessDAOImpl.getCorrelator(ProcessDAOImpl.java:95)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.select(BpelRuntimeContextImpl.java:306)
at org.apache.ode.bpel.runtime.PICK.run(PICK.java:105)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:879)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeNewInstance(PartnerLinkMyRoleImpl.java:205)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:309)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:250)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:305)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:458)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:553)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:445)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:537)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:284)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:239)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:515)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2012-03-12 02:32:29,477] ERROR {org.apache.ode.scheduler.simple.SimpleScheduler} - Error while processing a persisted job: Job hqejbhcnphr7333k8shi7j time: 2012-03-12 02:32:29 PDT transacted: true persisted: true details: JobDetails( instanceId: null mexId: hqejbhcnphr7333k8shi7i processId: {http://ode/bpel/sampleprocess2}sampleProcess-1 type: INVOKE_INTERNAL channel: null correlatorId: null correlationKeySet: null retryCount: null inMem: false detailsExt: {enqueue=false}) {org.apache.ode.scheduler.simple.SimpleScheduler}
java.lang.IllegalStateException: No transaction associated with current thread
at org.apache.geronimo.transaction.manager.TransactionManagerImpl.rollback(TransactionManagerImpl.java:247)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:297)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:239)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:531)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:515)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[2012-03-12 02:32:29,683] ERROR {org.apache.ode.scheduler.simple.SimpleScheduler} - Error while processing job, retrying in 5s {org.apache.ode.scheduler.simple.SimpleScheduler}
OpenJPA properties
"openjpa.TransactionMode", "managed"
"openjpa.Log", "commons"
"openjpa.ManagedRuntime", new JpaTxMgrProvider(_tm)
"openjpa.ConnectionFactory", _ds
"openjpa.ConnectionFactoryMode", "managed"
"openjpa.jdbc.TransactionIsolation", "read-committed"
"openjpa.FlushBeforeQueries", "true"
I'm running a standalone server which uses Embedded Tomcat.
I'm not expecting a solution, just some pointers to troubleshoot the issue.
Thanks,
Waruna
I have the same case:
Caused by: <openjpa-2.2.2-r422266:1468616 nonfatal user error> org.apache.openjpa.persistence.InvalidStateException: Attempt to commit a null javax.transaction.Transaction. Some application servers set the transaction to null if a rollback occurs.
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnlyInternal(BrokerImpl.java:1664)
at org.apache.openjpa.kernel.BrokerImpl.setRollbackOnly(BrokerImpl.java:1650)
... 26 more
If your persistence.xml has a property like this one:
<properties>
    <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(foreignKeys=true)"/>
</properties>
then OpenJPA generates all tables from your JPA classes, including the table OPENJPA_SEQUENCE_TABLE (very important!).
Without the property "openjpa.jdbc.SynchronizeMappings", the table OPENJPA_SEQUENCE_TABLE will not be generated, and you get exceptions in the OpenJPA log like this one:
fatal store error> org.apache.openjpa.util.StoreException: Table "OPENJPA_SEQUENCE_TABLE" not found; SQL statement:
SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR UPDATE [42102-174] {SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ? FOR UPDATE} [code=42102, state=42S02]
at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4962)
at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4922)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:136)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:110)
at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:62)
at org.apache.openjpa.jdbc.kernel.AbstractJDBCSeq.next(AbstractJDBCSeq.java:66)
Edit: try to add this table manually:
CREATE TABLE OPENJPA_SEQUENCE_TABLE
(
    ID TINYINT PRIMARY KEY NOT NULL,
    SEQUENCE_VALUE BIGINT
);