SAP HANA Vora: Unable to query Vora tables

I am trying to query a Vora table with select * from table_name, but I am getting the error below. I am using SAP HANA Vora 1.2.
Caused by:
sap.hanavora.jdbc.VoraException: HL(9): Runtime error. (schema error: schema "spark_velocity" does not exist (c++ exception)) at
sap.hanavora.jdbc.driver.HLMessage.buildException(HLMessage.java:88) at
sap.hanavora.jdbc.driver.tcp.TcpDriver.checkErrorsV2(TcpDriver.java:488) at
sap.hanavora.jdbc.driver.tcp.TcpDriver.sendMessageCheckAndReciveRespone(TcpDriver.java:520) at
sap.hanavora.jdbc.driver.tcp.TcpDriver.execute(TcpDriver.java:178) at
sap.hanavora.jdbc.HLStatement.executeHelper(HLStatement.java:37) at
sap.hanavora.jdbc.HLStatement.executeQuery(HLStatement.java:22) at
sap.hanavora.jdbc.AbstractHLStatement.execute(AbstractHLStatement.java:55) at
com.sap.spark.vora.client.jdbc.VoraJdbcClient.execute(VoraJdbcClient.scala:559) at
com.sap.spark.vora.client.jdbc.VoraJdbcClient.executeSetSchema(VoraJdbcClient.scala:537) at
com.sap.spark.vora.client.jdbc.VoraJdbcClient.liftedTree1$1(VoraJdbcClient.scala:143)
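For context, the query is issued from Spark through the Vora data source. Below is a minimal sketch of the setup in which the error appears, assuming the Vora Spark extensions (which provide the SapSQLContext class) and an existing SparkContext named sc; table_name is a placeholder.
// Minimal sketch, assuming the Vora Spark extension jars are on the classpath
import org.apache.spark.sql.SapSQLContext
val vc = new SapSQLContext(sc)
// Make previously created Vora tables known to this Spark session
vc.sql("REGISTER ALL TABLES USING com.sap.spark.vora")
// The statement that fails with the schema error shown above
vc.sql("SELECT * FROM table_name").show()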
Could you please help me clear the Vora catalog?
Is there any way to clear or restart the Vora catalog on SAP HANA Vora 1.2?
Thanks, Akash

Related

Getting an error while trying to import data into Power BI using the Spark on Azure HDInsight connector

I am new to Power BI and Azure HDInsight. I am trying to import data into Power BI using the Spark connector, but I am getting the error below.
We couldn't import data from Spark on Azure HDInsight.
Make sure you are entering the information correctly.
The status code in the error message is 500.
Any inputs would be appreciated.
Please let me know if you need more information.
Thanks,
Shreya Kaushik
The issue turned out to be that the Azure HDInsight cluster we had created was version 3.6, while the Power BI Spark connector currently only supports version 3.5.
Thanks all for your help!
Shreya Kaushik

Cannot Register Vora Tables in Spark

When attempting to register all tables in Vora with
vc.sql("REGISTER ALL TABLES USING com.sap.spark.vora")
I receive the following error:
"The current Vora version does not support parallel loading of partitioned tables. Please wait until the previous partitioned tables are loaded, then issue your query again."
Is there a way to clear all previous requests? Is there a way to clear the Vora catalog outside of a SQL command?
This error can occur in Vora 1.2 due to a program error in the handling of partitioned tables. A workaround has now been documented on the Troubleshooting Blog, and the issue is planned to be addressed in the next Vora version.
Deleting the vora-discovery and vora-dlog directories removed all metadata and we were able to recreate our tables.
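After such a reset, the tables have to be created again from Spark before they can be registered. A minimal sketch, assuming a SapSQLContext named vc; the table name, column list, file path, and option values are illustrative placeholders rather than values from this thread.
// Recreate a table in the Vora catalog (placeholder columns, path, and options)
vc.sql("""CREATE TABLE MY_TABLE (ID int, NAME varchar(100))
          USING com.sap.spark.vora
          OPTIONS (tableName "MY_TABLE", paths "/path/to/data.csv")""")
// Afterwards, other Spark sessions can pick the tables up again with the
// statement quoted above:
vc.sql("REGISTER ALL TABLES USING com.sap.spark.vora")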

Read Apache HIVE table from Informatica

I need to read an Apache Hive table using Informatica and then, after some transformations, write the data to an MS SQL table.
Can anyone please let me know which driver/connector is required to connect to Apache Hive from Informatica? Is there a specific Informatica version from which this is supported?
Informatica Big Data Edition (BDE) supports Hive both as a source and target.
More information: BDE User Guide

Tables created in Zeppelin are not visible in JDBC connection view of VORA

We created the table CUSTOMER through the Zeppelin interface of Vora. Now I want to access CUSTOMER via SAP Lumira. After starting the Thrift server, we logged into the JDBC connector command prompt using beeline, as suggested in the developer guide.
When I connect Lumira to the JDBC connector, I cannot find my table CUSTOMER under CATALOG_VIEW.
I tried to register the table CUSTOMER again from the JDBC connector prompt, but it gave an error message.
Is it not possible to access a Vora table created through the Zeppelin interface over the JDBC connection?
(Error screenshot attached.)
Which version of Vora are you using? Connectivity to Lumira is only supported as of Vora 1.1.
If you use Ambari to manage the Vora installation you can check the version at 'Admin' -> 'Stacks and Versions'.
It's working now. I passed the attribute "namenodeurl" in the REGISTER TABLE command, and the table is now visible under CATALOG_VIEW in Lumira.
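For reference, the registration that worked looks roughly like this. A minimal sketch, assuming a SapSQLContext named vc (the same statement can be issued from the beeline prompt connected to the Thrift server); the NameNode host and port are placeholders.
// Re-register the existing Vora table with an explicit NameNode URL
// (host and port below are placeholders)
vc.sql("""REGISTER TABLE CUSTOMER
          USING com.sap.spark.vora
          OPTIONS (namenodeurl "namenode-host:8020")""")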

Error viewing API Manager Statistics using WSO2 DAS

I'm attempting to use API Manager 1.9.1 and store statistics in DAS 3.0.0. I'm using a MySQL database to hold my WSO2AM_STATS_DB instance.
Data is being stored successfully in the database; I have records indicating that some requests were throttled out and others were made successfully. Unfortunately, when I attempt to view any of the statistics in either the Store or the Publisher application, the logs show this error:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Expression #2 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'TempTable.apiPublisher' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
Can anyone provide some guidance on how to resolve this?
I was able to resolve this issue by removing ONLY_FULL_GROUP_BY from the MySQL sql_mode configuration.
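If editing the MySQL configuration file is not convenient, the flag can also be stripped from the running server's sql_mode. A minimal sketch over JDBC; the connection URL, user, and password are placeholders, and changing a GLOBAL variable requires the corresponding MySQL privilege.
// Minimal sketch: remove ONLY_FULL_GROUP_BY from the server's sql_mode over JDBC.
// URL and credentials are placeholders; for a permanent change, set sql_mode in
// the MySQL configuration file instead.
import java.sql.DriverManager
val conn = DriverManager.getConnection(
  "jdbc:mysql://localhost:3306/WSO2AM_STATS_DB", "user", "password")
try {
  val stmt = conn.createStatement()
  stmt.execute("SET GLOBAL sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''))")
  stmt.close()
} finally {
  conn.close()
}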