Can't create JPA tables in Derby with GlassFish 3 - jpa-2.0

I am trying to create a clone database (or several) in Derby from a legacy database using JPA (2.0). The JPA entities were generated from the existing database (by JDidea tools, I think), and now I am using Eclipse + GlassFish 3 + EclipseLink + Derby + JDK 6 to clone that database for secure development (and testing).
However, I am getting these messages in the GlassFish log:
INFO: EclipseLink, version: Eclipse Persistence Services - 2.3.2.v20111125-r10461
INFO: file:/C:/siilo/glassfish3/glassfish/domains/domain1/eclipseApps/omaasiointi-web/WEB-INF/classes/PU_NAME login successful
WARNING: Multiple [2] JMX MBeanServer instances exist, we will use the server at index [0] : [com.sun.enterprise.v3.admin.DynamicInterceptor#22beebcd].
WARNING: JMX MBeanServer in use: [com.sun.enterprise.v3.admin.DynamicInterceptor#22beebcd] from index [0]
WARNING: JMX MBeanServer in use: [com.sun.jmx.mbeanserver.JmxMBeanServer#62ccf439] from index [1]
WARNING: PER01000: Got SQLException executing statement "CREATE TABLE "postikoodi" ("lrivino" BIGINT NOT NULL, "szkielikoodi" VARCHAR(255), "szkuntakoodi" VARCHAR(255), "szpostinumero" VARCHAR(255), "szpostitoimipaikka" VARCHAR(255), "sztimestamp" VARCHAR(255), "szuserid" VARCHAR(255), PRIMARY KEY ("lrivino"))": java.sql.SQLException: Table/View 'postikoodi' already exists in Schema 'APP'.
...
And there are a lot of these messages. Multiple identical databases are used, and EclipseLink presumably tries to create all the tables in all of them, which I suppose is why I get such an enormous number of these errors (they all seem to be about the same thing).
I created a couple of database connections in Eclipse and checked that the 'APP' schema was empty in them, so I do not quite understand the error.
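As a side note, a quick way to verify which tables actually exist in a Derby schema is to query the standard system catalogs; a minimal sketch (run, e.g., from ij):
-- List user tables in the APP schema using Derby's system catalogs
SELECT T.TABLENAME
FROM SYS.SYSTABLES T
JOIN SYS.SYSSCHEMAS S ON T.SCHEMAID = S.SCHEMAID
WHERE S.SCHEMANAME = 'APP' AND T.TABLETYPE = 'T';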
In persistence.xml I have the following defined:
<persistence-unit name="PU_NAME" transaction-type="JTA">
    <jta-data-source>jdbc/__default</jta-data-source>
    <properties>
        <property name="eclipselink.target-database" value="org.eclipse.persistence.platform.database.DerbyPlatform"/>
        <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
        <property name="eclipselink.ddl-generation.output-mode" value="database"/>
    </properties>
</persistence-unit>
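For reference, EclipseLink can also write the generated DDL to script files instead of executing it directly against every database, which makes it easier to inspect and control exactly what gets run. A minimal sketch, assuming EclipseLink's standard DDL-script properties (the file names are placeholders):
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
<property name="eclipselink.ddl-generation.output-mode" value="sql-script"/>
<property name="eclipselink.create-ddl-jdbc-file-name" value="create.sql"/>
<property name="eclipselink.drop-ddl-jdbc-file-name" value="drop.sql"/>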
Any advice?

Related

WSO2 IS - POST_DELETE_USER error while deleting user from IS

We have installed WSO2 AM 2.6.0 with IS as Key Manager (5.7). We deployed AM as an active-active all-in-one instance, and IS as KM active-active too, following all the directives in the official documentation.
Based on the documentation, we created the following databases with their respective datasources: regdb (registry), carbondb, userdb (user store), mb-store, apimdb.
The issue that we have now is on the IS side. We tried several things to check that everything was working correctly, like creating users, checking registry access, etc. We created a user called "test", changed some properties, and after that we proceeded to delete the user.
When we deleted the user, we got an error popup on the IS console.
Checking the logs, we found the following:
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cm_receipt" does not exist
Position: 135
TID: [-1234] [] [2020-05-11 09:00:30,062] ERROR {org.wso2.carbon.user.mgt.ui.UserAdminClient} - Error when handling event : POST_DELETE_USER
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
We checked the database and the user was deleted correctly; the IS carbon console no longer displays it. Digging a little deeper, the delete-user process is trying to access the table "cm_receipt" in carbondb, but the table exists in apimdb.
On postgres side, we have this log during the delete:
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>ERROR: relation "cm_receipt" does not exist at character 135
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>STATEMENT: SELECT R.CONSENT_RECEIPT_ID, R.LANGUAGE, R.PII_PRINCIPAL_ID, R.PRINCIPAL_TENANT_ID, R.STATE,RS.SP_DISPLAY_NAME,RS.SP_DESCRIPTION FROM CM_RECEIPT R INNER JOIN CM_RECEIPT_SP_ASSOC RS ON R.CONSENT_RECEIPT_ID=RS.CONSENT_RECEIPT_ID WHERE PII_PRINCIPAL_ID LIKE $1 AND PRINCIPAL_TENANT_ID =$2 AND SP_NAME LIKE $3 AND STATE LIKE $4 ORDER BY ID ASC LIMIT $5 OFFSET $6
Have you got any idea why this could be happening? Is there some related bug or something?
Thanks!
There could be two reasons for this.
You've forgotten to execute the DB script which contains the consent management tables: /wso2is-5.7.0/dbscripts/consent/postgresql.sql.
Your wso2is-5.7.0/repository/conf/consent-mgt-config.xml configuration file is referring to the wrong datasource.
Solution
Check which datasource the consent-mgt-config.xml file refers to. By default it looks like this:
<ConsentManager xmlns="http://wso2.org/carbon/consent/management" xmlns:svns="http://org.wso2.securevault/configuration">
    <DataSource>
        <!-- Include a data source name (jndiConfigName) from the set of data sources defined in master-datasources.xml -->
        <Name>jdbc/WSO2IdentityDB</Name>
    </DataSource>
Here, it's jdbc/WSO2IdentityDB. Then go to your wso2is-5.7.0/repository/conf/datasources/master-datasources.xml file and check the database of that datasource. If the mentioned tables are not created in that database, you can execute the above-mentioned postgresql.sql script against it. (If you've already created these tables in a different datasource, you might want to change the datasource defined in the consent-mgt-config.xml file.)
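For orientation, the matching entry in master-datasources.xml looks roughly like this (the URL and credentials below are placeholders, not values from any particular deployment):
<datasource>
    <name>WSO2IdentityDB</name>
    <jndiConfig>
        <name>jdbc/WSO2IdentityDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/identitydb</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>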
P.S. Never use the -Dsetup argument for automatic execution of database scripts on startup. Always execute the database scripts against the database manually.
P.S. The reason the user deletion itself succeeds is that this user-consent removal runs as a POST_DELETE_USER event. A failure in a POST handler won't affect the action itself.

Errors when upgrading WSO2 IS as KM and APIs are not migrated

I tried to upgrade WSO2 IS as KM from 5.6.0 to 5.7.0 and API Manager from 2.5.0 to 2.6.0 following these instructions:
https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release+when+WSO2+IS+is+the+Key+Manager
https://docs.wso2.com/display/IS570/Upgrading+from+the+Previous+Release
https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release
The instructions do not mention that I need to import the SQL script from /dbscripts/apimgt/mysql.sql into the apimgt DB again, even though 5.7.0 IS as KM has more tables in that DB than the 5.6.0 version.
During the upgrade of IS I got these errors in the logs:
ERROR {org.wso2.carbon.is.migration.service.SchemaMigrator} - Error occurred while executing SQL script for migrating database
java.lang.Exception: Error occurred while executing : CREATE INDEX IDX_RID ON IDN_UMA_RESOURCE (RESOURCE_ID)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate key name 'IDX_RID'
ERROR {org.wso2.carbon.is.migration.service.SchemaMigrator} - Error occurred while executing SQL script for migrating database
java.lang.Exception: Error occurred while executing : CREATE INDEX IDX_SP_TEMPLATE ON SP_TEMPLATE (TENANT_ID, NAME)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate key name 'IDX_SP_TEMPLATE'
ERROR {org.wso2.carbon.is.migration.service.SchemaMigrator} - Error occurred while executing SQL script for migrating database
java.lang.Exception: Error occurred while executing : ALTER TABLE CM_PURPOSE ADD COLUMN PURPOSE_GROUP VARCHAR(255) NOT NULL, ADD COLUMN GROUP_TYPE VARCHAR(255) NOT NULL, DROP KEY NAME, ADD UNIQUE KEY (NAME, TENANT_ID, PURPOSE_GROUP, GROUP_TYPE)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column name 'PURPOSE_GROUP'
INFO {org.wso2.carbon.is.migration.service.v570.migrator.OAuthDataMigrator} - WSO2 Product Migration Service Task : Migration starting on OAuth2 access token table.
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to /usr/lib64/wso2/wso2is-km-5.7.0/repository/logs/heap-dump.hprof ...
Unable to create /usr/lib64/wso2/wso2is-km-5.7.0/repository/logs/heap-dump.hprof: File exists
ERROR {org.wso2.carbon.is.migration.MigrationClientImpl} - Migration process was stopped.
java.lang.OutOfMemoryError: GC overhead limit exceeded
Despite the errors, IS launched; afterwards I launched API-M without errors, but my APIs were not migrated.
I use MariaDB 5.5.
What is the problem with IS, and why were my APIs not migrated?
"So i shouldn't do Steps from 2.1 to 2.4, yes? But what with errors that i have during upgrade of IS?"
That means, didn't you do the steps 2.1 to 2.4 in https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release#c130adc044364015ae336f584909e3ac?
If you have already followed the steps mentioned in https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release+when+WSO2+IS+is+the+Key+Manager#250, then you need to skip only the 2.4 step in https://docs.wso2.com/display/AM260/Upgrading+from+the+Previous+Release#c130adc044364015ae336f584909e3ac.
Also please make sure that you have given the versions correctly in the migration-config.yaml file as below.
migrationEnable: "true"
currentVersion: "5.6.0"
migrateVersion: "5.7.0"
Note: You do not need to create tables manually.

WSO2 IS and ESB UTF-8

Error installing IS
I am installing WSO2 IS with MySQL.
MySQL is set to UTF-8. These are the errors:
1:
CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON REG_PATH(REG_PATH_VALUE, REG_TENANT_ID);
Error
MySQL Database Error: Table '#sql-197b_412' uses an extension that doesn't exist in this MySQL version
2:
CREATE TABLE IDN_OAUTH2_AUTHORIZATION_CODE (
AUTHORIZATION_CODE VARCHAR(255),
CONSUMER_KEY VARCHAR(255),
CALLBACK_URL VARCHAR(1024),
SCOPE VARCHAR(2048),
AUTHZ_USER VARCHAR(512),
TIME_CREATED TIMESTAMP,
VALIDITY_PERIOD BIGINT,
PRIMARY KEY (AUTHORIZATION_CODE),
FOREIGN KEY (CONSUMER_KEY) REFERENCES IDN_OAUTH_CONSUMER_APPS(CONSUMER_KEY) ON DELETE CASCADE
)TABLESPACE tb_regdb engine ndb storage disk;
Error:
MySQL Database Error: Got error 851 'Maximum 8052 bytes of FIXED columns supported, use varchar or COLUMN_FORMAT DYNAMIC instead' from NDBCLUSTER
Can WSO2 be installed on a database in UTF-8?
Regards
I assume that you are using a MySQL cluster with the NDB engine. Therefore you need to run the MySQL script which can be found at <IS_HOME>/dbscripts/mysql_cluster.sql.
In addition, Identity Server also has identity scripts under <IS_HOME>/dbscripts/identity/ and <IS_HOME>/dbscripts/identity/application-mgt/. The identity scripts are not shipped in a mysql_cluster variant for the NDB engine.
However, you can modify the existing mysql.sql scripts under <IS_HOME>/dbscripts/identity/ and <IS_HOME>/dbscripts/identity/application-mgt/ to make them work with MySQL cluster. In dbscripts/identity/mysql.sql and dbscripts/identity/application-mgt/mysql.sql, the ENGINE INNODB clauses can be replaced with ENGINE NDB, as sketched below. The value of the configuration option MaxNoOfTriggers also had to be increased to avoid NDB 4239 errors.
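A minimal sketch of the engine change (the table below is a made-up example, not one taken from the actual scripts):
-- As shipped in dbscripts/identity/mysql.sql (InnoDB):
-- CREATE TABLE EXAMPLE_TABLE (ID INTEGER, NAME VARCHAR(255)) ENGINE INNODB;
-- Modified for MySQL Cluster (NDB):
CREATE TABLE EXAMPLE_TABLE (ID INTEGER, NAME VARCHAR(255)) ENGINE NDB;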

Hibernate running on a separate JVM fails to read

I am implementing a WebService with Hibernate to write/read data in a database (MySQL). One big issue I had: when I successfully inserted data (e.g., into the USER table) via one JVM (for example, a JUnit test or directly from a DB UI suite), my WebService's Hibernate running on a separate JVM could not find the new data. They all point to the same DB server. Only after I destroyed the WebService's Hibernate SessionFactory and recreated it could the WebService's Hibernate layer read the newly inserted data. In contrast, the same JUnit test or a direct query from the DB UI suite could find the inserted data.
Any assistance is appreciated.
This issue was resolved today with the following:
I changed our Hibernate config file (hibernate.cfg.xml) to set the transaction isolation level to at least "2" (READ COMMITTED), as sketched after the links below. This immediately resolved the issue. To understand more about this isolation level setting, please refer to these:
Hibernate reading function shows old data
Transaction isolation levels relation with locks on table
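A minimal sketch of the setting itself (hibernate.connection.isolation is the standard Hibernate property; the value 2 corresponds to java.sql.Connection.TRANSACTION_READ_COMMITTED):
<!-- hibernate.cfg.xml: run pooled connections at READ COMMITTED isolation -->
<property name="hibernate.connection.isolation">2</property>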
I ensured I did not use second-level caching by setting CacheMode to IGNORE for each of my Session objects:
// Open a new Session and make it bypass the second-level cache
Session session = getSessionFactory().openSession();
session.setCacheMode(CacheMode.IGNORE);
For reference only: some folks did the following in hibernate.cfg.xml to disable second-level caching in their apps (but I didn't need to):
<property name="cache.provider_class">org.hibernate.cache.internal.NoCacheProvider</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>

How to log SQL values sent to my DB using EclipseLink?

I use EclipseLink as my JPA 2 persistence layer, and I would like to see the values sent to the DB in the logs.
I already see the SQL queries (using <property name="eclipselink.logging.level" value="ALL" /> in my persistence.xml), but, for example in an SQL INSERT, I do not see the values, only the ? placeholders.
So, how can I see what values are actually sent?
You'll need to use a JDBC proxy driver like p6spy or log4jdbc to get the SQL statements logged with their values instead of the placeholders. This approach works well if you are using EclipseLink with a connection pool whose URL comes from persistence.xml (where you can specify a JDBC URL recognized by the proxy driver instead of the actual one), but it may not be so useful in a Java EE environment (at least for log4jdbc), unless you can get the JNDI data sources to use the proxy driver.
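A minimal sketch of what the persistence.xml side might look like with p6spy (the driver class and the jdbc:p6spy: URL prefix come from p6spy; the underlying MySQL URL and credentials are placeholders):
<persistence-unit name="PU_NAME" transaction-type="RESOURCE_LOCAL">
    <properties>
        <!-- Route JDBC traffic through p6spy so statements are logged with their bound values -->
        <property name="javax.persistence.jdbc.driver" value="com.p6spy.engine.spy.P6SpyDriver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:p6spy:mysql://localhost:3306/mydb"/>
        <property name="javax.persistence.jdbc.user" value="user"/>
        <property name="javax.persistence.jdbc.password" value="secret"/>
    </properties>
</persistence-unit>
p6spy then writes the resolved statements to its own log, configured via a spy.properties file on the classpath.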