I am trying to access a HANA view from SAP Vora using Zeppelin, but I am getting the error "No schema provided for non-existing table!".
I can't find any information about it. If anyone knows anything about this, I would be grateful.
Querying views in HANA via the HANA data source is supported. The path option takes either a table name or a view name. I just tested it and it works for me both in the Spark shell and in Zeppelin (with both Vora 1.0 and Vora 1.1). Are your view names in Vora and HANA identical?
Here is the Zeppelin code I used. 'TESTVIEW' is a view in my HANA system.
%vora
CREATE TEMPORARY TABLE sparktestview
USING com.sap.spark.hana
OPTIONS (
path "TESTVIEW",
host "myhost",
dbschema "SPARKTEST",
user "myuser",
passwd "mypwd",
instance "00"
)
%vora
select * from sparktestview
Related
I'm using the NGDBC driver (SAP HANA JDBC driver) with an AWS Glue Notebook. Once I include the JAR file, I use the following line to access data from SAP HANA in our environment.
df = glueContext.read.format("jdbc").option("driver", jdbc_driver_name).option("url", db_url).option("dbtable", "KNA1").option("user", db_username).option("password", db_password).load()
In this example, it simply downloads the KNA1 table, but I have yet to see any documentation that explains how to actually query the SAP HANA instance through these options. I attempted to use a "query" option, but that didn't seem to be available via the JAR.
Am I to understand that I simply have to fetch entire tables and then query against the DataFrame? That seems expensive and not what I want to do. Maybe someone can provide some insight.
Try like this:
df = glueContext.read.format("jdbc").option("driver", jdbc_driver_name).option("url", db_url).option("dbtable", "(select name1 from kna1 where kunnr='1111') as name").option("user", db_username).option("password", db_password).load()
i.e. wrap the query in parentheses and provide an alias, as the help suggests.
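The trick generalizes: anything of the form "(subquery) alias" is accepted wherever a table name goes, and the subquery is pushed down to HANA rather than evaluated in Spark. A small helper makes the pattern explicit (the Glue connection details in the commented usage are placeholders taken from the question):

```python
def pushdown_query(sql: str, alias: str = "q") -> str:
    """Wrap an arbitrary SELECT so it can be passed as the `dbtable`
    option: the JDBC source treats "(subquery) alias" like a table
    name, so the WHERE clause runs inside the database, not in Spark."""
    return f"({sql}) as {alias}"

# Hypothetical usage with a Glue context (connection variables are placeholders):
# df = (glueContext.read.format("jdbc")
#       .option("driver", jdbc_driver_name)
#       .option("url", db_url)
#       .option("dbtable", pushdown_query("select name1 from kna1 where kunnr='1111'"))
#       .option("user", db_username)
#       .option("password", db_password)
#       .load())

print(pushdown_query("select name1 from kna1 where kunnr='1111'"))
# → (select name1 from kna1 where kunnr='1111') as q
```

Note that on Spark 2.4 and later the built-in JDBC source also accepts a `query` option directly; whether that is surfaced depends on the Glue and driver versions in use.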
I know that the query result reuse feature was recently added to AWS Athena.
I confirmed the feature works in the AWS web console.
But I can't use it from DBeaver.
I tried changing the JDBC URL parameters (e.g. jdbc:athena?param1=val1...), but it didn't work.
The parameters are:
enableResultReuse=1
ageforResultReuse=60
Has anyone solved this problem?
Reference: https://aws.amazon.com/ko/blogs/big-data/reduce-cost-and-improve-query-performance-with-amazon-athena-query-result-reuse/
Try adding both parameters to the Athena driver properties (in the connection settings) in DBeaver.
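If DBeaver does not expose a dedicated field for these, the Simba-based Athena drivers also accept properties embedded in the connection URL, separated by semicolons. A hypothetical URL is sketched below; the region, output location, and the parameter names/casing are assumptions taken from the question and may differ between driver versions:

```
jdbc:awsathena://AwsRegion=us-east-1;S3OutputLocation=s3://my-bucket/athena-results/;enableResultReuse=1;ageforResultReuse=60
```

Result reuse also requires a sufficiently recent driver, so upgrading the Athena driver bundled with DBeaver may be necessary.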
When I go to the IOT Core Registry page (on the GCP console) and select a device, I can edit it. There's a "Device metadata" section there, reading the following:
You can set custom metadata, such as manufacturer, location, etc. for the device. These
can be used to query devices in this registry. Learn more
However, the linked documentation page shows nothing about querying devices using metadata.
Is this possible at all? What can be done using device metadata?
I am asking because I am looking for the following features, which Azure IoT Hub has with device twin tags:
Ideally I would like to enrich messages the device sends (state, events) with corresponding metadata.
Querying for multiple devices based on a metadata field.
One first has to add device metadata before one can query it:
https://cloud.google.com/iot/docs/how-tos/devices#creating_a_device
https://cloud.google.com/iot/docs/how-tos/devices#getting_device_details
One can query with gcloud iot devices list --registry=REGISTRY --region=REGION:
--filter="metadata.items.key['test_metadata'][value]='test_value'"
See gcloud topic filters for more information about filter expressions.
Or with format: --format='value[](metadata.items.test_metadata)'
It might be easier to implement this using the client libraries. Following the suggestion of @MartinZeitler, list your devices, then perform a get for each device and check its metadata. See the Python code below for the implementation:
from google.cloud import iot_v1

def sample_list_devices(meta_key_name, meta_val_name):
    # Create a client
    client = iot_v1.DeviceManagerClient()

    project_id = "your-project-id"
    location = "asia-east1"  # define your device location
    registry = "your-registry-id"
    parent = f"projects/{project_id}/locations/{location}/registries/{registry}"

    # Initialize request argument(s)
    list_request = iot_v1.ListDevicesRequest(
        parent=parent,
    )

    # Make the request
    list_result = client.list_devices(request=list_request)

    # Handle the response
    for response in list_result:
        device = response.num_id
        get_request = iot_v1.GetDeviceRequest(
            name=f"{parent}/devices/{device}",
        )
        get_response = client.get_device(request=get_request)
        if get_response.metadata[meta_key_name] == meta_val_name:
            print(get_response)
            return get_response

# define the metadata key and metadata value that you want to use for filtering
sample_list_devices(meta_key_name="test_key", meta_val_name="test_val")
No, it is not possible to query metadata the way you want. The docs say the following about the server:
"Cloud IoT Core does not interpret or index device metadata."
As you are already aware, as a client-side workaround to simulate a query, we can list all the devices first and then filter the output by metadata.
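The client-side filtering step itself can be sketched on plain dictionaries (real responses are Device protos with a .metadata mapping; the dicts and device ids below are made up stand-ins):

```python
def filter_devices_by_metadata(devices, key, value):
    """Client-side workaround: Cloud IoT Core does not interpret or
    index device metadata, so after listing the devices we keep only
    those whose metadata matches the requested key/value pair."""
    return [d for d in devices if d.get("metadata", {}).get(key) == value]

# Simulated list-devices output (placeholder ids and metadata):
devices = [
    {"id": "dev-1", "metadata": {"test_key": "test_val"}},
    {"id": "dev-2", "metadata": {"test_key": "other"}},
    {"id": "dev-3", "metadata": {}},
]

matches = filter_devices_by_metadata(devices, "test_key", "test_val")
print([d["id"] for d in matches])  # → ['dev-1']
```

This keeps the server round-trips (list, then get per device) separate from the matching logic, which makes the matching easy to test offline.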
I am using JDBC to connect to Athena for a specific workgroup, but by default it redirects to the primary workgroup.
Below is the code snippet:
Properties info = new Properties();
info.put("user", "access-key");
info.put("password", "secrect-access-key");
info.put("WorkGroup","test");
info.put("schema", "testschema");
info.put("s3_staging_dir", "s3://bucket/athena/temp");
info.put("aws_credentials_provider_class","com.amazonaws.auth.DefaultAWSCredentialsProviderChain");
Class.forName("com.simba.athena.jdbc.Driver");
Connection connection = DriverManager.getConnection("jdbc:awsathena://athena.<region>.amazonaws.com:443/", info);
As you can see, I am using "WorkGroup" as the key in the properties. I also tried "workgroup" and "work-group". It does not connect to the specified workgroup; it always goes to the default one, i.e. the primary workgroup.
Kindly help. Thanks
If you look at the release notes of the Athena JDBC driver, workgroup support was added in v2.0.7.
If your jar is below this version, it will not work. Try upgrading the library to 2.0.7 or above.
You need to enable "Override client-side settings" in the workgroup configuration, then rerun the query via JDBC.
Check this doc for more information.
Solved.
Found the problem: I had a schema defined in my Java class.
I have a Cloud Foundry app which uses a MySQL data service.
It works great, but I want to add another database table.
When I redeploy to Cloud Foundry with the new entity class, it does not create the table, and the log contains the following error:
2012-08-12 20:42:23,699 [main] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - CREATE command denied to user 'ulPKtgaPXgdtl'@'172.30.49.146' for table 'acl_class'
The schema is created dynamically by the service. All you need to do is bind the application to the service and use the cloud namespace. As you mentioned above, removing the schema name from your configuration file will resolve the issue.
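For reference, a minimal sketch of what that looks like with the legacy Spring cloud namespace (the bean id and schema version here are assumptions; adjust them to your setup):

```xml
<!-- The cloud namespace asks Cloud Foundry to inject the bound MySQL
     service as the data source; no schema name is hard-coded, so
     Hibernate's schema update runs against the schema the service
     provisioned for the app. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:cloud="http://www.springframework.org/schema/cloud"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/cloud
           http://www.springframework.org/schema/cloud/cloud-1.0.xsd">

    <cloud:data-source id="dataSource"/>

</beans>
```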