Distributed WSO2 APIM: Problems with KeyManager - wso2

I am now testing the API Manager with a distributed install of the product.
When I start the Analytics and Publisher nodes (on separate hosts), the Analytics log keeps showing these error messages:
[2018-04-12 15:00:18,770] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting loganalyzer:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId loganalyzer:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
... 7 more

This means that APIM (or another product) is sending events with streamId loganalyzer:1.0.0, but the analytics server has no such stream definition.
The analytics server is effectively a WSO2 DAS with streams and analytics preconfigured for another product (here, APIM). The log messages indicate that the analytics application (org_wso2_carbon_analytics_apim-1.0.0.car) is not (yet) deployed.
This commonly happens at analytics server startup: it receives the product (APIM) events before the analytics app is deployed. Once the analytics app is deployed, DAS should stop logging these messages.
So in your case I would check the beginning of the analytics server's log file to see why the analytics application is not being deployed properly.

Related

'Read Time Out' between wso2 APIM and APIM-Analytics

Scenario:
APIM and APIM-Analytics (both 2.6.0) on the same localhost machine.
Identity Server on another machine.
I followed the documentation to configure the connection between APIM and Analytics.
I set up the datasources for an external Oracle DB instance.
IS starts OK, the Analytics Worker starts OK, the Analytics Dashboard starts OK, the Analytics Manager starts OK.
After the default configuration, APIM starts with a connection issue:
...
ERROR {org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker} - Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7712. {org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker}
org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Cannot borrow client for ssl://localhost:7712.
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:134)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointLoginException: Error while trying to login to the data receiver.
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftDataEndpoint.login(ThriftDataEndpoint.java:54)
at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:128)
... 6 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.send_connect(ThriftSecureEventTransmissionService.java:104)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.connect(ThriftSecureEventTransmissionService.java:95)
at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftDataEndpoint.login(ThriftDataEndpoint.java:47)
... 7 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:750)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 11 more
...
When I access the Analytics consoles (Publisher, Store, or Admin), the API Usage analytics interface gets stuck.
I tried making a few changes inside api-manager.xml. The Analytics section now looks like this:
<!-- Enable Analytics for API Manager -->
<Enabled>true</Enabled>
<StreamProcessorServerURL>{tcp://localhost:7612}</StreamProcessorServerURL>
<!--StreamProcessorAuthServerURL>{ssl://localhost:7712}</StreamProcessorAuthServerURL-->
<!-- Administrator username to login to the remote StreamProcessor server. -->
<StreamProcessorUsername>admin</StreamProcessorUsername>
<!-- Administrator password to login to the remote StreamProcessor server. -->
<StreamProcessorPassword>admin</StreamProcessorPassword>
<!-- For APIM implemented Statistic client for RDBMS -->
<StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl>
<!-- StreamProcessor REST API configuration -->
<StreamProcessorRestApiURL>https://localhost:9444</StreamProcessorRestApiURL>
<StreamProcessorRestApiUsername>admin</StreamProcessorRestApiUsername>
<StreamProcessorRestApiPassword>admin</StreamProcessorRestApiPassword>
I would like to find out why this happens when I follow the default documentation (https://docs.wso2.com/display/AM260/Configuring+APIM+Analytics).
thanks
This problem was solved by importing the Analytics certificate into both wso2carbon.jks and client-truststore.jks.
At first I had only imported it into client-truststore.jks and missed wso2carbon.jks.
It is important to use the fully qualified hostname when creating the new certificates and keystores, and to use the same name when linking the tools in api-manager.xml (see the sketch below).
Remember to add the fully qualified name to the hosts file as well.
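For reference, a minimal sketch of how the Analytics section of api-manager.xml could look once the same fully qualified hostname is used everywhere. The hostname analytics.example.com is only an illustrative placeholder; ports and credentials should be adjusted to your own setup:
<Analytics>
    <!-- Enable Analytics for API Manager -->
    <Enabled>true</Enabled>
    <!-- "analytics.example.com" is a placeholder: use the fully qualified hostname that
         appears in the Analytics certificate and in the hosts file -->
    <StreamProcessorServerURL>{tcp://analytics.example.com:7612}</StreamProcessorServerURL>
    <StreamProcessorAuthServerURL>{ssl://analytics.example.com:7712}</StreamProcessorAuthServerURL>
    <StreamProcessorUsername>admin</StreamProcessorUsername>
    <StreamProcessorPassword>admin</StreamProcessorPassword>
    <StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl>
    <StreamProcessorRestApiURL>https://analytics.example.com:9444</StreamProcessorRestApiURL>
    <StreamProcessorRestApiUsername>admin</StreamProcessorRestApiUsername>
    <StreamProcessorRestApiPassword>admin</StreamProcessorRestApiPassword>
</Analytics>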
Thanks

WSO2 EI Analytics Profile Database configuration

I'm configuring WSO2 EI Analytics profile to use PostgreSQL instead of H2 database.
I have changed the following files:
analytics-datasources.xml,
master-datasources.xml,
metrics-datasources.xml
in \wso2\analytics\conf\datasources.
I have also executed the scripts in dbscripts to create the databases. The scripts only generate tables for metrics and master; they do not create tables for analytics.
Anyway, when I run the analytics server I get some errors, as shown below:
Failed to perform Category Drilldown on table: org_wso2_esb_analytics_stream_MediatorStatPerMinute: Error while connecting to the remote service. Connection refused (Connection refused) {JAGGERY.controllers.apis.eianalytics:jag}
TID: [-1234] [] [2017-11-06 16:43:00,262] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234 {org.wso2.carbon.databridge.core.internal.queue.QueueWorker}
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.esb.analytics.stream.FlowEntry:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.esb.analytics.stream.FlowEntry:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
... 7 more
It seems some database tables are missing, but I don't know how to create them.
These errors do not appear when I use the H2 database with the default configuration.
Can anyone help me?
I solved the problem.
It was a JDBC driver problem.
With JDK 1.8 it is necessary to use the PostgreSQL JDBC driver 42.1.4.
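For anyone hitting the same thing, here is a minimal sketch of what one of the PostgreSQL datasource entries in analytics-datasources.xml might look like; the datasource name is taken from the usual defaults, and the host, database, and credentials are placeholders. The postgresql-42.1.4.jar driver also has to be copied into the server's lib directory (check the product documentation for the exact location).
<datasource>
    <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
    <description>Datasource used by the analytics record store (verify the name against your installation)</description>
    <definition type="RDBMS">
        <configuration>
            <!-- Host, port, database name, and credentials below are placeholders -->
            <url>jdbc:postgresql://localhost:5432/analytics_event_store</url>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>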
I hope it will be useful for someone.

API usages are not recorded when integrating WSO2 APIM1.10.0 cluster with WSO2 DAS 3.0.1 cluster

I am using WSO2 Kubernetes Artifacts to build WSO2 APIM 1.10.0 cluster.
Here is my configuration :
api-key-manager.yaml
api-publisher.yaml
api-store.yaml
gateway-manager.yaml
With the above configurations, the APIM cluster works fine in my Kubernetes environment. Then I want to get statistics from WSO2 DAS 3.0.1. Here are my steps:
Open admin-dashboard page.
Fill in DAS information.
Save configuration.
Publish the sample API and subscribe it.
Invoke the created API.
Though the API returns the correct result, I cannot see any statistics on the DAS page. The table ORG_WSO2_APIMGT_STATISTICS_REQUEST is also empty. Moreover, there are some exceptions in the gateway container, as follows:
2017-02-02T10:17:05.119378825Z [2017-02-02 10:17:05,118] ERROR - APIMgtUsageHandler Cannot publish event. null
2017-02-02T10:17:05.119410635Z java.lang.NullPointerException
2017-02-02T10:17:05.119416221Z at org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher.publishEvent(APIMgtUsageDataBridgeDataPublisher.java:124)
2017-02-02T10:17:05.119421345Z at org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler.handleRequest(APIMgtUsageHandler.java:169)
2017-02-02T10:17:05.119425422Z at org.apache.synapse.rest.API.process(API.java:322)
2017-02-02T10:17:05.119429269Z at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:86)
2017-02-02T10:17:05.119432713Z at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:65)
2017-02-02T10:17:05.119444539Z at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:295)
2017-02-02T10:17:05.119448051Z at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
2017-02-02T10:17:05.119451190Z at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
2017-02-02T10:17:05.119454693Z at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:317)
2017-02-02T10:17:05.119457708Z at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:149)
2017-02-02T10:17:05.119460675Z at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
2017-02-02T10:17:05.119463755Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2017-02-02T10:17:05.119466748Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2017-02-02T10:17:05.119470008Z at java.lang.Thread.run(Thread.java:745)
2017-02-02T10:17:05.292159023Z [2017-02-02 10:17:05,291] ERROR - APIMgtResponseHandler Cannot publish response event. null
2017-02-02T10:17:05.292186860Z java.lang.NullPointerException
2017-02-02T10:17:05.292191607Z at org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher.publishEvent(APIMgtUsageDataBridgeDataPublisher.java:140)
2017-02-02T10:17:05.292196079Z at org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler.mediate(APIMgtResponseHandler.java:211)
2017-02-02T10:17:05.292199487Z at org.apache.synapse.mediators.ext.ClassMediator.mediate(ClassMediator.java:84)
2017-02-02T10:17:05.292202823Z at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:81)
2017-02-02T10:17:05.292206246Z at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:48)
2017-02-02T10:17:05.292210195Z at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:155)
2017-02-02T10:17:05.292213976Z at org.apache.synapse.rest.Resource.process(Resource.java:297)
2017-02-02T10:17:05.292216990Z at org.apache.synapse.rest.API.process(API.java:335)
2017-02-02T10:17:05.292220203Z at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:86)
2017-02-02T10:17:05.292223430Z at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:52)
2017-02-02T10:17:05.292226576Z at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:295)
2017-02-02T10:17:05.292229762Z at org.apache.synapse.core.axis2.SynapseCallbackReceiver.handleMessage(SynapseCallbackReceiver.java:529)
2017-02-02T10:17:05.292232861Z at org.apache.synapse.core.axis2.SynapseCallbackReceiver.receive(SynapseCallbackReceiver.java:172)
2017-02-02T10:17:05.292236007Z at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
2017-02-02T10:17:05.292238952Z at org.apache.synapse.transport.passthru.ClientWorker.run(ClientWorker.java:251)
2017-02-02T10:17:05.292252632Z at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
2017-02-02T10:17:05.292256191Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2017-02-02T10:17:05.292259335Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2017-02-02T10:17:05.292262507Z at java.lang.Thread.run(Thread.java:745)
The problem may be the same as this issue, but I don't see a solution.
Edit 1
I also did two experiments, as follows.
First:
Create a ubuntu pod.
Install WSO2 APIM 1.10.0 on the ubuntu pod container.
Open the admin-dashboard page and fill in DAS information.
Publish the sample API and subscribe it.
Invoke the created API.
This works fine: I can see the statistics on the DAS page.
Second:
Jump into the APIM container.
Use telnet to verify the Thrift port of the DAS cluster.
The Thrift port was accessible from the APIM cluster.
Based on the exception, I think this might be caused by missing configuration in the gateway container?

WSO2 DAS server configuration issue - Dropping wrongly formatted event sent for -1234

I have configured DAS with the API Manager server using the REST client, but I am not able to push data to the DAS server. Please see the error logs from the DAS server below. Could you please help me understand what is wrong in the configuration?
TID: [-1234] [] [2016-05-20 18:07:05,566] ERROR {org.wso2.carbon.databridge.core.internal.queue.QueueWorker} - Dropping wrongly formatted event sent for -1234 {org.wso2.carbon.databridge.core.internal.queue.QueueWorker}
org.wso2.carbon.databridge.core.exception.EventConversionException: Error when converting org.wso2.apimgt.statistics.throttle:1.0.0 of event bundle with events 1
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:181)
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.toEventList(ThriftEventConverter.java:90)
at org.wso2.carbon.databridge.core.internal.queue.QueueWorker.run(QueueWorker.java:73)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.wso2.carbon.databridge.core.exception.EventConversionException: No StreamDefinition for streamId org.wso2.apimgt.statistics.throttle:1.0.0 present in cache
at org.wso2.carbon.databridge.receiver.thrift.converter.ThriftEventConverter.createEventList(ThriftEventConverter.java:166)
... 7 more
Can you try redeploying the .car app again? To do that, first do the following:
Delete the .car application from /repository/deployment/server/carbonapps
Delete any existing streams defs (related to APIM stats) by login to DAS management console and going to Manage > Event > Streams
Re deploy car app by putting it in /repository/deployment/server/carbonapps
If everything goes well, you should see two scripts in the Manage > Batch Analytics > Scripts section. Try executing each script and see whether there are any errors.

Errors using input-only web service (OUT_ONLY from ESB)

I have a web service with some input-only operations. In the ESB I've created a proxy and set the properties OUT_ONLY and FORCE_SC_ACCEPTED to true. Every time I call the proxied operation I get the following error message in wso2carbon.log:
TID: [0] [ESB] [2015-04-02 09:52:45,307] ERROR {org.apache.axis2.transport.base.threads.NativeWorkerPool} - Uncaught exception {org.apache.axis2.transport.base.threads.NativeWorkerPool}
java.lang.UnsupportedOperationException: Not yet implemented
at org.apache.axis2.description.OutOnlyAxisOperation.getMessage(OutOnlyAxisOperation.java:124)
at org.wso2.carbon.core.multitenancy.MultitenantMessageReceiver.processResponse(MultitenantMessageReceiver.java:125)
at org.wso2.carbon.core.multitenancy.MultitenantMessageReceiver.receive(MultitenantMessageReceiver.java:81)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
at org.apache.synapse.transport.passthru.ClientWorker.run(ClientWorker.java:225)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Although everything seems to work OK, I am worried about this message. What am I doing wrong? These input-only operations will be called very frequently in production, so I'd like them to be error-free.
WSO2 ESB: 4.8.1
Thanks,
Danny
This exception occurs if OUT_ONLY=true and your backend sends a response back to the ESB. If OUT_ONLY is set to true and you are still getting a response from the backend, that is not a valid scenario for the OUT_ONLY property. Check this post [1].
[1] https://mohanadarshan.wordpress.com/2013/05/05/out_only-scenario-in-proxy-service-wso2-esb/
The OUT_ONLY property is set to inform the ESB that the service does not return a response, for instance when you are sending messages to a message broker. The FORCE_SC_ACCEPTED flag causes the ESB to send an HTTP 202 Accepted response back to the client (which calls the ESB), since otherwise the client would time out without a response. So please make sure your backend service does not send a response and that it is accessible to the ESB. A minimal sketch of such a proxy follows below.
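For illustration, a minimal out-only proxy along these lines; the proxy name and the backend endpoint URI are placeholders:
<proxy xmlns="http://ws.apache.org/ns/synapse" name="InOnlyProxy" transports="http,https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- No response is expected from the backend -->
            <property name="OUT_ONLY" value="true"/>
            <!-- Immediately reply to the caller with HTTP 202 Accepted -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <send>
                <endpoint>
                    <!-- Placeholder address of the in-only backend operation -->
                    <address uri="http://backend.example.com/services/InOnlyService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>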
Solved this issue for now: my ESB was running in multi-tenant mode, and the proxy services were created in a tenant. I did a fresh install and put the config in directly (so no tenants), and the error disappeared immediately. When I remove the config, create a tenant, and put the config into the tenant, the error reappears. So this might be a bug. I can try to verify by running sample 253 (OneWayProxy) in a tenant.