Issue writing from SAP HANA Vora to SAP HANA

I am trying to write data from HANA Vora to HANA.
I used the code from the SAP HANA Academy example, https://github.com/saphanaacademy/Vora/blob/master/Vora_Writing2HANA.txt
However, I get an error when I execute the line below:
hana_datardd.write.format("com.sap.spark.hana").mode(SaveMode.Overwrite).options(HANA_LOAD_OPTIONS).save()
The error says "error: not found: value SaveMode".
Can you please guide me on writing data from HANA Vora to HANA?
Thanks in advance.

You need to import SaveMode
import org.apache.spark.sql.SaveMode
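With the import in place, the academy snippet compiles; a minimal sketch (hana_datardd and HANA_LOAD_OPTIONS are assumed to be defined earlier in that script):

import org.apache.spark.sql.SaveMode

// SaveMode.Overwrite now resolves; the write itself is unchanged from the academy script.
hana_datardd.write
  .format("com.sap.spark.hana")
  .mode(SaveMode.Overwrite)
  .options(HANA_LOAD_OPTIONS)
  .save()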

Related

How to directly query SAP HANA via AWS Glue Notebook

I'm using the NGDBC driver (the SAP HANA JDBC driver) with an AWS Glue notebook. Once I include the JAR file, I use the following line to access data from SAP HANA in our environment.
df = glueContext.read.format("jdbc") \
    .option("driver", jdbc_driver_name) \
    .option("url", db_url) \
    .option("dbtable", "KNA1") \
    .option("user", db_username) \
    .option("password", db_password) \
    .load()
In this example it simply downloads the KNA1 table, but I have yet to see any documentation that explains how to actually run a query against the SAP HANA instance through these options. I attempted to use a "query" option, but it didn't seem to be available via the JAR.
Am I to understand that I simply have to pull entire tables and then query against the DataFrame? That seems expensive and not what I want to do. Maybe someone can provide some insight.
Try like this:
df = glueContext.read.format("jdbc") \
    .option("driver", jdbc_driver_name) \
    .option("url", db_url) \
    .option("dbtable", "(select name1 from kna1 where kunnr='1111') as name") \
    .option("user", db_username) \
    .option("password", db_password) \
    .load()
i.e. wrap the query in parentheses and give it an alias, as the documentation suggests.
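For what it's worth, Glue's JDBC reader is Spark's JDBC data source underneath, so the same pushdown pattern also works from plain Spark. A rough Scala sketch, with all connection values as placeholders (com.sap.db.jdbc.Driver is the NGDBC driver class):

// Placeholders: adjust URL and credentials for your HANA instance.
val dbUrl = "jdbc:sap://your-hana-host:30015"

val df = spark.read.format("jdbc")
  .option("driver", "com.sap.db.jdbc.Driver")
  // A parenthesized subquery with an alias is pushed down to HANA as-is.
  .option("dbtable", "(select name1 from kna1 where kunnr = '1111') as name")
  .option("url", dbUrl)
  .option("user", "DB_USER")
  .option("password", "DB_PASSWORD")
  .load()

df.show()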

How to connect to Amazon Athena using Simba ODBC

I am attempting to connect to Athena from RStudio using DBI::dbConnect, and I am having trouble opening the driver.
con <- DBI::dbConnect(
  odbc::odbc(),
  Driver = "[Simba Athena ODBC Driver]",
  S3OutputLocation = "[s3://bucket-folder/]",
  AwsRegion = "[region]",
  AuthenticationType = "IAM Credentials",
  Schema = "[schema]",
  UID = rstudioapi::askForPassword("AWS Access Key"),
  PWD = rstudioapi::askForPassword("AWS Secret Key"))
Error: nanodbc/nanodbc.cpp:983: 00000: [unixODBC][Driver Manager]Can't open lib '[Simba Athena ODBC Driver]' : file not found
In addition, this code returns nothing.
sort((unique(odbcListDrivers()[[1]])))
character(0)
It appears that my ODBC driver is inaccessible or incorrectly installed, but I am having trouble understanding why. I have downloaded the driver and can see it in my library.
Any insight is greatly appreciated!
The function arguments look wrong: remove the square brackets from the Driver, S3OutputLocation, and AwsRegion values; the brackets are placeholders, not literal syntax.
I solved it by checking the list of drivers R recognizes with odbc::odbcListDrivers() and then adjusting the Driver argument to match. If R still can't find the driver, setting ODBCSYSINI=/folder_that_contains_odbcinst.ini/ in .Renviron fixed it for me.
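For reference, odbcinst.ini needs an entry whose section name matches the Driver value exactly; a sketch (the library path below is the default Simba install location on Linux and may differ on your system):

[Simba Athena ODBC Driver]
Description=Simba Athena ODBC Driver (64-bit)
Driver=/opt/simba/athenaodbc/lib/64/libathenaodbc_sb64.so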

Issue connecting to a Databricks table from Azure Data Factory using the Spark ODBC connector

We have managed to get a valid connection from Azure Data Factory to our Azure Databricks cluster using the Spark (ODBC) connector. We do get the expected list of tables, but querying a specific table raises an exception.
ERROR [HY000] [Microsoft][Hardy] (35) Error from server: error code:
'0' error message:
'com.databricks.backend.daemon.data.common.InvalidMountException:
Error while using path xxxx for resolving path xxxx within mount at
'/mnt/xxxx'.'.. Activity ID:050ac7b5-3e3f-4c8f-bcd1-106b158231f3
In our case the Databricks tables are backed by Parquet files mounted from Azure Data Lake Storage Gen2, which is what the exception refers to. Any suggestions on how to solve this issue?
PS: the same error appears when connecting from Power BI Desktop.
Thanks
Bart
In your configuration for mounting the lake, can you add this setting:
"fs.azure.createRemoteFileSystemDuringInitialization": "true"
I haven't tried your exact scenario, but this solved a similar problem for me using Databricks Connect.
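For context, the flag belongs in the extraConfigs map passed to the mount call. A hedged Scala sketch of an ADLS Gen2 mount with a service principal, where every account, secret, and container value is a placeholder:

// Run in a Databricks notebook; all <...> values are placeholders.
val configs = Map(
  "fs.azure.account.auth.type" -> "OAuth",
  "fs.azure.account.oauth.provider.type" ->
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
  "fs.azure.account.oauth2.client.id" -> "<application-id>",
  "fs.azure.account.oauth2.client.secret" -> "<client-secret>",
  "fs.azure.account.oauth2.client.endpoint" ->
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
  // The setting suggested above:
  "fs.azure.createRemoteFileSystemDuringInitialization" -> "true"
)

dbutils.fs.mount(
  source = "abfss://<container>@<storage-account>.dfs.core.windows.net/",
  mountPoint = "/mnt/xxxx",
  extraConfigs = configs
)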

How to access HANA VIEW from SAP VORA

I am trying to access a HANA view from SAP Vora using Zeppelin, but I get the error "No schema provided for non-existing table!".
I can't find any information about this error. Any pointers would be greatly appreciated.
Querying views in HANA via the HANA data source is supported: the path option takes either a table name or a view name. I just tested it and it works for me both in the Spark shell and in Zeppelin (with both Vora 1.0 and Vora 1.1). Are the view names you use in Vora and HANA identical?
Here is the Zeppelin code I used. TESTVIEW is a view in my HANA system.
%vora
CREATE TEMPORARY TABLE sparktestview
USING com.sap.spark.hana
OPTIONS (
path "TESTVIEW",
host "myhost",
dbschema "SPARKTEST",
user "myuser",
passwd "mypwd",
instance "00"
)
%vora
select * from sparktestview
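The same registration from the Spark shell might look roughly like this in Scala (a sketch: it assumes the Vora Spark extensions are on the classpath and reuses the placeholder connection values from above):

// The DDL is identical to the %vora snippet; sqlContext is the one the
// Vora-enabled spark-shell creates at startup.
sqlContext.sql(
  """CREATE TEMPORARY TABLE sparktestview
    |USING com.sap.spark.hana
    |OPTIONS (
    |  path "TESTVIEW",
    |  host "myhost",
    |  dbschema "SPARKTEST",
    |  user "myuser",
    |  passwd "mypwd",
    |  instance "00"
    |)""".stripMargin)

sqlContext.sql("SELECT * FROM sparktestview").show()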

Informatica loading error

I am a newbie to Informatica.
I am using Informatica 9.1.0 with Oracle 11g as the source and target database.
I created a table in the target database and tried to load data from source to target.
The table gets created in the target database, and the mapping and workflow I created are valid, but when I start the workflow it gives me the following error:
Message Code RR_4036
Message
Error connecting to database [ Arun
ORA-00900: invalid SQL statement
Database driver error...
Function Name : executeDirect
SQL Stmt : Arun
Oracle Fatal Error
Database driver error...
Function Name : ExecuteDirect
Oracle Fatal Error
].
Please help me find a solution.
I found the solution.
Previously, while creating the remote connection in the Relational Connection Editor for the session, I had chosen "UTF-8 encoding of Unicode" in the code page option. I changed it to "MS Windows Latin 1 (ANSI), superset of Latin 1" and restarted the workflow, which then succeeded.
The following video shows how to create a relational connection for a session:
http://www.youtube.com/watch?v=oM2d-IHfRUw