WSO2 DAS - Error in index data peekNext: Map failed

My scenario:
I have some transaction details in a MySQL DB. I use a WSO2 ESB server to push this data into a WSO2 DAS server (the data is persisted in an H2 DB with a primary key and an index). The data is loaded into the DAS server successfully, but the problem I face is that I see an ERROR in my DAS console continuously, every three seconds. The error is given below.
[2016-04-21 09:09:48,175] ERROR {org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer} - Error in processing index batch operations: Error in index data peekNext: Map failed
org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException: Error in index data peekNext: Map failed
at org.wso2.carbon.analytics.dataservice.core.indexing.LocalIndexDataStore$LocalIndexDataQueue.peekNext(LocalIndexDataStore.java:287)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processLocalShardDataQueue(AnalyticsDataIndexer.java:297)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.processIndexOperations(AnalyticsDataIndexer.java:261)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer.access$200(AnalyticsDataIndexer.java:141)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataIndexer$IndexWorker.run(AnalyticsDataIndexer.java:1865)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:888)
at com.leansoft.bigqueue.page.MappedPageFactoryImpl.acquirePage(MappedPageFactoryImpl.java:86)
at com.leansoft.bigqueue.BigArrayImpl.append(BigArrayImpl.java:325)
at com.leansoft.bigqueue.BigQueueImpl.enqueue(BigQueueImpl.java:92)
at org.wso2.carbon.analytics.dataservice.core.indexing.LocalIndexDataStore$LocalIndexDataQueue.peekNext(LocalIndexDataStore.java:271)
... 7 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:885)
... 11 more
I am not sure why this is happening. Please help and thanks in advance.

This happens because you are using H2, which is an in-memory database and uses memory-mapped files. We don't recommend using H2 in production deployments.
This error usually occurs while mapping a big file into memory, e.g. trying to map a file greater than 1 or 2 GB.
You can also use the -d64 and -XX:MaxDirectMemorySize JVM options to enable large direct buffers.
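For example, those options could be appended to the java invocation in the DAS startup script; a minimal sketch, assuming the default wso2server.sh script and a 2 GB limit you would tune to your indexing load:

# Illustrative excerpt from <DAS_HOME>/bin/wso2server.sh: append the two flags
# to the existing java arguments; the rest of the invocation stays unchanged.
# The 2g value is an assumption, not a recommendation.
$JAVACMD \
    -d64 \
    -XX:MaxDirectMemorySize=2g \
    ...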

Related

How to configure a datasource in WSO2 Streaming Integrator 7.1.0 to auto-reconnect to MySQL

I've tried to set up a datasource connecting to MySQL in the deployment.yaml of WSO2 SI so that I can use it in my Siddhi apps.
It worked fine as normal, but when I restart MySQL the datasource cannot reconnect to the DB, so my Siddhi app gets the following errors.
How can I configure the datasource so it can auto-reconnect after a database restart?
Thank you,
Luong.
[2020-11-12 19:32:43,627] ERROR {io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy} - Error occurred while processing records in table SweetProductionTable. {mode=polling, app=CDCWithPollingMode, stream=insertSweetProductionStream}
java.sql.SQLException: Connection is closed
at com.zaxxer.hikari.pool.ProxyConnection$ClosedConnection.lambda$getClosedConnection$0(ProxyConnection.java:493)
at com.sun.proxy.$Proxy73.prepareStatement(Unknown Source)
at com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:315)
at com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
at io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy.printEvent(DefaultPollingStrategy.java:142)
at io.siddhi.extension.io.cdc.source.polling.strategies.DefaultPollingStrategy.poll(DefaultPollingStrategy.java:86)
at io.siddhi.extension.io.cdc.source.polling.CDCPoller.run(CDCPoller.java:202)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is a bug in siddhi-io-cdc: a Siddhi app using CDC polling mode gets errors when the source database restarts.
These failures are caused by a bug in the DefaultPollingStrategy class: printEvent(connection) does not verify the connection before using it; it should call getConnection() first.
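A minimal sketch of the kind of fix involved (hypothetical helper names; the actual siddhi-io-cdc code differs in detail):

// Hypothetical sketch: acquire a fresh connection from the pool for each poll
// instead of reusing one that a database restart may have closed.
Connection connection = getConnection();   // re-acquire from the HikariCP pool
try {
    printEvent(connection);                // the connection is known to be live here
} finally {
    connection.close();                    // returns the connection to the pool
}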
At the end of the MySQL connection URL, you can add the following:
&autoReconnect=true
For example:
jdbc:mysql://localhost:3306/wso2_api_stat_alt?useSSL=false&autoReconnect=true
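For context, the URL sits in the datasource definition of deployment.yaml (typically <SI_HOME>/conf/server/deployment.yaml); a sketch with placeholder names and credentials:

wso2.datasources:
  dataSources:
    - name: SWEET_PRODUCTION_DB            # placeholder datasource name
      definition:
        type: RDBMS
        configuration:
          # the autoReconnect flag is appended to the JDBC URL
          jdbcUrl: 'jdbc:mysql://localhost:3306/production?useSSL=false&autoReconnect=true'
          username: root                   # placeholder credentials
          password: root
          driverClassName: com.mysql.jdbc.Driver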

Errors in API Analytics with API Manager

I am setting up WSO2 APIM with Analytics running on Docker. I am getting the following error in the worker, and no data is being published to the console. I'm using a MySQL database.
I am using the docker images at https://github.com/wso2/docker-apim/tree/v2.6.0.3/dockerfiles/centos with:
OpenJDK8U-jdk_x64_linux_hotspot_8u222b10
mysql-connector-java-5.1.47-bin.jar
[2019-10-05 04:58:27,208] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.metrics.stream.Gauge:1.0.0 of event bundle with events 4
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:188)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:72)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.metrics.stream.Gauge:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:171)
... 7 more
This issue occurs when the particular stream, "org.wso2.metrics.stream.Gauge:1.0.0", hasn't been deployed properly from the CApp. The steps below can be followed to resolve this (a sketch of steps 1 and 2 follows the list):
1) Remove the tmp directory which is available in the APIM_ANALYTICS_HOME directory.
2) Restart the server.
3) If any issue is observed while deploying the CApp, back up the CApp and remove it from the carbon console (you can find it under the faulty apps section; this removes any cached data related to the CApp).
4) Then follow steps 1 and 2 again.
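On Linux, steps 1 and 2 look roughly like this (assuming the analytics node is started via the worker profile; script and path names are illustrative):

# clear the analytics node's deployment cache, then restart it
cd $APIM_ANALYTICS_HOME
rm -rf tmp/
sh bin/worker.sh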

How can this error about the WSO2 AM 2.2.0 database be fixed?

I am using WSO2 AM 2.2.0 with a local database. I changed the H2 database to my local SQL Server database, and every table was created and works correctly. When this error happens the system isn't down and continues to work, but I am still getting this error continuously.
Here is the error:
[2018-05-20 07:50:24,935] ERROR - JDBCReporter Error when reporting timers
com.microsoft.sqlserver.jdbc.SQLServerException: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter 6 (""): The supplied value is not a valid instance of data type float. Check the source data for invalid values. An example of an invalid value is data of numeric type with scale greater than precision.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:259)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatementBatch(SQLServerPreparedStatement.java:2678)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtBatchExecCmd.doExecute(SQLServerPreparedStatement.java:2547)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7347)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2713)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:224)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:204)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2460)
at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy19.executeBatch(Unknown Source)
at org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter.reportTimers(JDBCReporter.java:389)
at org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter.report(JDBCReporter.java:200)
at com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162)
at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Looks like a metrics DB issue. If you're not using metrics, you can disable it in the <APIM_HOME>/repository/conf/metrics.xml file.
Ref: https://docs.wso2.com/display/AM220/Enabling+Metrics+and+Storage+Types
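Disabling it amounts to something like the following in metrics.xml (a sketch; verify against the exact schema shipped with your distribution):

<Metrics xmlns="http://wso2.org/projects/carbon/metrics.xml">
    <!-- Setting this to false stops metrics collection, including the
         JDBCReporter that raises the error above -->
    <Enabled>false</Enabled>
</Metrics>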
Maybe you have run into this bug:
https://support.microsoft.com/de-at/help/970519/you-receive-a-the-incoming-tabular-data-stream-tds-remote-procedure-ca
There are workarounds available on that page.

WSO2 ESB 4.8.1 Registry Resource Not Found Error

I am deploying a CAR file from WSO2 Developer Studio (3.7.0). Everything was working fine until my machine crashed for some reason. When I restarted it and started building the project again, the ESB malfunctioned. I am deploying the CAR file and it is deployed successfully, but some of the registry resources do not appear in the management view. When I try to access them, I receive the following error:
Error:
at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:411)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.synapse.SynapseException: Error creating XSLT transformer using : Value {name ='null', keyValue ='gov:example/services/crm/v1/xslt/ConvertRequest.xslt'}
at org.apache.synapse.mediators.AbstractMediator.handleException(AbstractMediator.java:313)
at org.apache.synapse.mediators.transform.XSLTMediator.createTemplate(XSLTMediator.java:393)
at org.apache.synapse.mediators.transform.XSLTMediator.performXSLT(XSLTMediator.java:232)
at org.apache.synapse.mediators.transform.XSLTMediator.mediate(XSLTMediator.java:191)
... 20 more
Caused by: org.apache.synapse.SynapseException: Error while fetching the resource gov:example/services/crm/v1/xslt/ConvertRequest.xslt
at org.wso2.carbon.mediation.registry.WSO2Registry.handleException(WSO2Registry.java:709)
at org.wso2.carbon.mediation.registry.WSO2Registry.getResource(WSO2Registry.java:572)
at org.wso2.carbon.mediation.registry.WSO2Registry.lookup(WSO2Registry.java:145)
at org.apache.synapse.registry.AbstractRegistry.getResource(AbstractRegistry.java:66)
at org.apache.synapse.config.SynapseConfiguration.getEntry(SynapseConfiguration.java:761)
at org.apache.synapse.core.axis2.Axis2MessageContext.getEntry(Axis2MessageContext.java:265)
at org.apache.synapse.mediators.transform.XSLTMediator.createTemplate(XSLTMediator.java:383)
... 22 more
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: A SQLException error has occurred when trying to close result set or prepared statement
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getContentStream(JDBCResourceDAO.java:563)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.fillResourceContentWithNoUpdate(JDBCResourceDAO.java:1239)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.fillResource(JDBCResourceDAO.java:271)
at org.wso2.carbon.registry.core.jdbc.Repository.get(Repository.java:195)
at org.wso2.carbon.registry.core.jdbc.handlers.filters.MediaTypeMatcher.handleGet(MediaTypeMatcher.java:130)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.get(HandlerManager.java:2439)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.get(HandlerLifecycleManager.java:955)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.get(EmbeddedRegistry.java:512)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.get(CacheBackedRegistry.java:180)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:524)
at org.wso2.carbon.mediation.registry.WSO2Registry.getResource(WSO2Registry.java:569)
... 27 more
Caused by: org.h2.jdbc.JdbcSQLException: File not found: "/home/omerkhalid/Documents/WSO2/wso2esb-4.8.1/repository/database/WSO2CARBON_DB.lobs.db/84.lobs.db/21670.t22.lob.db" [90124-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.engine.Database.openFile(Database.java:443)
at org.h2.value.ValueLob.getInputStream(ValueLob.java:610)
at org.h2.jdbc.JdbcResultSet.getBinaryStream(JdbcResultSet.java:1020)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCResourceDAO.getContentStream(JDBCResourceDAO.java:553)
... 37 more
Note:
There is no problem with the CAR file, because the same CAR file works on 4.8.0 and on another 4.8.1 instance on another machine.
There is something wrong with the ESB database, as you can see in the above error log:
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: A SQLException error has occurred when trying to close result set or prepared statement
and this:
Caused by: org.h2.jdbc.JdbcSQLException: File not found: "/home/omerkhalid/Documents/WSO2/wso2esb-4.8.1/repository/database/WSO2CARBON_DB.lobs.db/84.lobs.db/21670.t22.lob.db" [90124-140]
So if anybody knows how to fix these issues, please help me, because I do not want to download a fresh instance of WSO2 ESB.
It seems you are using the embedded H2 databases, which are not recommended for production. If you are just evaluating WSO2 products that's fine, so I guess you don't have any real data there.
What you have to do is delete the contents of the $ESB_HOME/repository/database folder (don't delete the folder itself) and restart the server with the -Dsetup option. That will recreate the databases and solve all the H2 issues.
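On Linux, the steps look like this (it wipes the embedded registry and user-management data, so take a backup first if anything in there matters):

# remove the embedded H2 database files, but keep the folder itself
rm -rf $ESB_HOME/repository/database/*
# restart so the server recreates the H2 schemas from scratch
sh $ESB_HOME/bin/wso2server.sh -Dsetup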

Integrating WSO2 BAM with external Cassandra

I am trying to integrate an external Cassandra with BAM, and I have changed cassandra-component.xml.
1) I want to know how keyspaces are created on the external Cassandra, because when I run BAM I get the error "Unknown keyspace EVENT_KS".
2) I am getting the following error in my WSO2 logs:
TID: [0] [BAM] [2014-02-11 15:28:30,905] WARN {org.apache.hadoop.mapred.JobClient} - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. {org.apache.hadoop.mapred.JobClient}
TID: [0] [BAM] [2014-02-11 15:37:04,393] ERROR {org.apache.hadoop.hive.ql.exec.ExecDriver} - Job Submission failed with exception 'java.lang.RuntimeException(org.apache.thrift.transport.TTransportException)'
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getRangeMap(ColumnFamilyInputFormat.java:297)
at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:105)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:291)
at org.apache.hadoop.hive.cassandra.input.HiveCassandraStandardColumnInputFormat.getSplits(HiveCassandraStandardColumnInputFormat.java:216)
at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.thrift.transport.TTransportException
As I remember, EVENT_KS is created only after the very first event is published to BAM. If you try to access it before it is created, errors may arise.
In BAM 2.4.0, EVENT_KS is created when you run BAM for the first time (but in previous versions, EVENT_KS was created when the very first event was published to BAM). Please make sure your cassandra-component.xml looks similar to the example below. Also, tell us which Cassandra version you are using.
<Cassandra>
    <Cluster>
        <Name>Test Cluster</Name>
        <DefaultPort>9160</DefaultPort>
        <Nodes>localhost:9160</Nodes>
        <AutoDiscovery disable="false" delay="1000"/>
    </Cluster>
</Cassandra>
First, you need to check the following:
1) Have you pointed cassandra-component.xml correctly to the external Cassandra? With this, your published data will be stored in the intended external Cassandra database.
2) Have you installed a toolbox with the intended stream definition inside, or otherwise triggered publishing of data to BAM? In either case, the EVENT_KS keyspace will be created, with a column family named after the stream.
3) Have you modified $BAM_HOME/repository/conf/datasources/master-datasources.xml to point to the external Cassandra database? You need to validate the Cassandra configuration provided in the WSO2BAM_CASSANDRA_DATASOURCE datasource (see the sketch below). For the default toolboxes this is the default Cassandra datasource, and by default it points to localhost. If you are using it in your Hive script, you need to change this configuration.
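A sketch of that datasource entry in master-datasources.xml (host and credentials are placeholders to be matched to your external Cassandra):

<datasource>
    <name>WSO2BAM_CASSANDRA_DATASOURCE</name>
    <definition type="RDBMS">
        <configuration>
            <!-- point this at the external Cassandra node instead of localhost -->
            <url>jdbc:cassandra://cassandra-host:9160/EVENT_KS</url>
            <username>admin</username>
            <password>admin</password>
        </configuration>
    </definition>
</datasource>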
After much effort, I figured out that after changing the data directory in cassandra.yaml of the external Cassandra to repository/database/cassandra/data, everything works fine with the external Cassandra (version 1.1.3, to be specific). I want to know whether there is any other workaround for this external Cassandra configuration.