I have configured DAS and API Manager as per the instructions given in the documentation. I have separate Docker containers running DAS, and one container each running the API Manager manager, worker, publisher, and store profiles. Although I see data in the Data Explorer in the DAS UI, in the Publisher UI I only get a static HTML page. I see the following in the publisher logs. Any ideas?
ERROR - usage:jag java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
This was a known issue in DAS 3.0.1 and has been fixed in DAS 3.1.0.
It was caused by the destination RDBMS tables being dropped by Spark during each INSERT call.
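Until you can upgrade, one way to confirm that you are hitting this behavior is to watch the destination table between Spark script runs. Below is a minimal JDBC sketch, assuming a MySQL stats database and an example summary table named API_REQUEST_SUMMARY (URL, credentials, and table name are placeholders; adjust them to your schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Run this before and after a Spark script execution (requires the MySQL
// JDBC driver on the classpath). If the row count resets to a small number,
// or the table briefly disappears, the destination table is being dropped
// and recreated on each INSERT, as described above.
public class StatsTableProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/WSO2_STATS_DB"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM API_REQUEST_SUMMARY")) {
            rs.next();
            System.out.println("Row count: " + rs.getLong(1));
        }
    }
}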
I am trying to set up a new Workflow Manager & Service Bus farm on my local machine. At the "Create a new Service Bus farm" stage, I get the error "Unable to cast object of type 'System.DirectoryServices.AccountManagement.GroupPrincipal' to type 'System.DirectoryServices.AccountManagement.UserPrincipal'".
Please suggest a resolution.
Specifications:
SQL Server 2012 Enterprise Edition,
Workflow Manager 1.0 Refresh (CU2) + CU5,
Service Bus 1.1 CU1,
Windows Fabric 1.0.970.0
Error screenshot:
You need to make sure to run the Workflow Manager with a domain account.
A blog post (see its "Farm Creation" section) describes a similar issue, for your reference:
Fun With Installing Service Bus for Windows
There is also a TechNet blog post that lists tips for a successful installation of Workflow Manager; you can have a look at that as well.
I am getting this error when connecting Power BI to Azure Databricks through the built-in Spark connector:
Details: "ODBC: ERROR [HY000] [Microsoft][DriverSupport] (1170)
Unexpected response received from server. Please ensure the server
host and port specified for the connection are correct."
I have checked the host and port of the Databricks cluster many times, and have also tried after restarting the cluster.
Guide used for the connection:
https://docs.azuredatabricks.net/user-guide/bi/power-bi.html
Got the same problem today. I followed these instructions and it worked.
The user was not able to import SQL data into Power BI and was getting this error, even though testing the connection in ODBC was successful.
It turned out that he had old credentials stored in Power BI, and that caused authentication issues. Purging the cached data sources (Power BI: Home > Edit Queries > Data source settings) resolved the issue.
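If purging credentials does not help, it can also be worth testing the cluster endpoint outside Power BI entirely. Below is a rough JDBC probe, assuming the open-source Hive JDBC driver (org.apache.hive:hive-jdbc) and the HTTP-transport URL shape that Databricks documents; the region, org id, cluster id, and token are all placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Connectivity probe for a Databricks cluster over HTTPS (port 443),
// independent of Power BI and the ODBC stack.
public class DatabricksProbe {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:hive2://<region>.azuredatabricks.net:443/default;"
                + "transportMode=http;ssl=true;"
                + "httpPath=sql/protocolv1/o/<org-id>/<cluster-id>";
        // Databricks accepts "token" as the user name and a personal
        // access token as the password.
        try (Connection conn = DriverManager.getConnection(
                     url, "token", "<personal-access-token>");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Connected; SELECT 1 returned " + rs.getInt(1));
        }
    }
}

If this probe fails too, the problem lies with the host, port, or cluster state rather than with Power BI.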
We are doing an evaluation for metering purposes using WSO2 API Manager and DAS (latest versions).
Environment:
WSO2 API Manager runs as a 2-node active-active deployment using Hazelcast (4 cores, 8 GB RAM each), and
DAS runs as a single node.
Both connect to MySQL as the backend RDBMS.
DAS and MySQL share the same server (12 cores, 24 GB RAM); we have dedicated 12 GB to MySQL.
We started the test at a rate of 750 reads/sec, and everything went well for 27 hours until the metering count reached 72 million, after which we got the errors below.
At the API Manager: [PassThroughMessageProcessor-130] WARN DataPublisher Event queue is full, unable to process the event for endpoint Group.
At DAS (after 10 minutes), we got: INFO {com.leansoft.bigqueue.page.MappedPageFactoryImpl} - Page file /$DAS_HOME/repository/data/index_staging_queues/4P/index/page-12.dat was just deleted. {com.leansoft.bigqueue.page.MappedPageFactoryImpl}
Have we reached a limit of the infrastructure setup, or is this a performance issue in DAS? Can you please help us?
You need to tune the server performance of both DAS and APIM. The "Event queue is full" warning means the databridge agent on the APIM side is producing events faster than it can flush them to DAS, so events are being dropped; the agent's queue size can be increased in the APIM-side data-agent-config.xml, and the DAS receiver should be tuned to keep up.
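For context, here is a rough sketch of how events reach DAS through the databridge agent; the stream name, version, endpoint, and payload below are illustrative, not your exact configuration:

import org.wso2.carbon.databridge.agent.DataPublisher;
import org.wso2.carbon.databridge.commons.Event;
import org.wso2.carbon.databridge.commons.utils.DataBridgeCommonsUtils;

// The agent buffers events in an in-memory queue and flushes them to DAS
// asynchronously; the "Event queue is full" WARN is logged when events
// arrive faster than the agent can drain that queue.
public class PublisherSketch {
    public static void main(String[] args) throws Exception {
        DataPublisher publisher = new DataPublisher(
                "tcp://das-host:7611", "admin", "admin"); // placeholders
        String streamId = DataBridgeCommonsUtils.generateStreamId(
                "org.wso2.apimgt.statistics.request", "1.1.0"); // illustrative
        Event event = new Event(streamId, System.currentTimeMillis(),
                new Object[0], new Object[0],
                new Object[]{ /* payload matching the stream definition */ });
        publisher.publish(event); // returns quickly; may drop on overflow
        publisher.shutdown();
    }
}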
Can WSO2 CEP 4.1.0 work with Storm 0.10.0?
I get the following error when trying to use WSO2 CEP 4.1.0 with Storm 0.10.0 (Hortonworks HDP 2.4).
Can I apply the solution from https://community.hortonworks.com/questions/8916/submitting-a-topology-to-strom-cluster-required-fi.html to WSO2 CEP? If so, how?
INFO {org.wso2.carbon.event.processor.core.internal.storm.StormTopologyManager} - TopologySubmitterJob:97, Retrying to submit topology 'ExecutionPlan[-1234]' in 10000 ms
ERROR {org.wso2.carbon.event.processor.core.internal.storm.StormTopologyManager} - TopologySubmitterJob:97, Error connecting to storm when trying to check whether topology 'ExecutionPlan[-1234]' exist
org.wso2.carbon.event.processor.core.exception.ServerUnavailableException: Error connecting to storm when trying to check whether topology 'ExecutionPlan[-1234]' exist
at org.wso2.carbon.event.processor.core.internal.storm.StormTopologyManager$TopologySubmitter.isTopologyExist(StormTopologyManager.java:283)
at org.wso2.carbon.event.processor.core.internal.storm.StormTopologyManager$TopologySubmitter.run(StormTopologyManager.java:200)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift7.protocol.TProtocolException: Required field 'nimbus_uptime_secs' is unset! Struct:ClusterSummary(supervisors:[SupervisorSummary(host:m347hdp1.nova.otp.int, uptime_secs:4932482, num_workers:2, num_used_workers:0, supervisor_id:dd4256ff-b2cd-4873-bac6-d5488f505ca1), SupervisorSummary(host:m348hdp2.nova.otp.int, uptime_secs:4932472, num_workers:2, num_used_workers:0, supervisor_id:b50dab9a-44e4-4888-b370-72c0c7d7c16c)], nimbus_uptime_secs:0, topologies:[])
at backtype.storm.generated.ClusterSummary.validate(ClusterSummary.java:587)
at backtype.storm.generated.ClusterSummary.read(ClusterSummary.java:514)
at backtype.storm.generated.Nimbus$getClusterInfo_result.read(Nimbus.java:10665)
at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:78)
at backtype.storm.generated.Nimbus$Client.recv_getClusterInfo(Nimbus.java:468)
at backtype.storm.generated.Nimbus$Client.getClusterInfo(Nimbus.java:456)
at org.wso2.carbon.event.processor.core.internal.storm.StormTopologyManager$TopologySubmitter.isTopologyExist(StormTopologyManager.java:275)
... 2 more
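My understanding is that the failing call can be reproduced outside CEP with a small probe against Nimbus. Below is a rough sketch, assuming Storm's 0.9.x client JAR on the classpath and a storm.yaml pointing at the HDP Nimbus; the mismatch would then be that the 0.9.x Thrift-generated ClusterSummary still marks nimbus_uptime_secs as required, while a 0.10.0 Nimbus no longer sets it:

import backtype.storm.generated.ClusterSummary;
import backtype.storm.utils.NimbusClient;
import backtype.storm.utils.Utils;

import java.util.Map;

// This is essentially the call CEP makes before submitting a topology
// (see StormTopologyManager.isTopologyExist in the stack trace above).
public class NimbusVersionProbe {
    public static void main(String[] args) throws Exception {
        Map conf = Utils.readStormConfig(); // reads storm.yaml from the classpath
        NimbusClient client = NimbusClient.getConfiguredClient(conf);
        // With a 0.9.x client against a 0.10.0 Nimbus, deserializing this
        // response fails in ClusterSummary.validate(), as shown above.
        ClusterSummary summary = client.getClient().getClusterInfo();
        System.out.println("Topologies: " + summary.get_topologies_size());
    }
}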
While trying to follow the instructions in the wso2am (1.10.0) manual regarding working with statistics on the wso2das (3.0.1) server, I have encountered a problem.
If I let the wso2am server define the stream when the first API call is made, the wso2das server refuses to post statistics to the WSO2_STATS_DB.
If, on the other hand, I import the analytics.car file into wso2das (as stated here ), I get an exception (AsyncDataPublisher Stream definition already exist) because the org.wso2.apimgt.statistics.request stream defined in the latest Analytics.car is different from the one being sent from wso2am.
I have pinpointed the problem to the definition of Eventstream_request_1.0 in the files
org.wso2.apimgt.statistics.request_1.0.0.json and
throttledOutORG_WSO2_APIMGT_STATISTICS_REQUEST.xml,
where the definition of the throttledOut attribute is missing.
Is there a way to solve this issue?
Thank you.
I think your DAS is in some kind of corrupted state. First delete the car application (from /repository/deployment/server/carbonapps), then log in to DAS, go to Manage > Event > Streams, and delete any existing streams. Then try to deploy the car app again in the /repository/deployment/server/carbonapps location.
If everything goes well, you should see two scripts in the Manage > Batch Analytics > Scripts section. Try executing each script and check whether there are any errors. If not, you can point the API Manager to DAS.
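On the throttledOut mismatch itself: databridge streams are identified by name and version, and every attribute of a redefined stream must match the existing definition, which is why a differing definition triggers the "Stream definition already exist" error. A rough Java sketch of the idea, using the databridge commons API (the attribute list is abbreviated and illustrative):

import org.wso2.carbon.databridge.commons.AttributeType;
import org.wso2.carbon.databridge.commons.StreamDefinition;

// Two StreamDefinitions with the same name:version but different
// attribute lists are treated as conflicting definitions.
public class StreamDefinitionSketch {
    public static void main(String[] args) throws Exception {
        StreamDefinition def = new StreamDefinition(
                "org.wso2.apimgt.statistics.request", "1.0.0");
        def.addMetaData("clientType", AttributeType.STRING);
        def.addPayloadData("api", AttributeType.STRING);
        // ... remaining request-stream attributes ...
        def.addPayloadData("throttledOut", AttributeType.BOOL);
        System.out.println(def.getStreamId());
    }
}

So whichever side defines the stream first (wso2am or the car file), the other side must send exactly the same attribute list, including throttledOut.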