Must we stop the "Spark Thrift Server" if we want to use the Vora Thriftserver? - vora

We are currently validating SAP HANA Vora 1.2 with IBM BigInsights.
After we finished installing Vora, starting the Thriftserver fails with the error "Could not create ServerSocket". I found the technote http://scn.sap.com/blogs/vora/2015/12/09/sap-hana-vora--troubleshooting#11, which describes this as a known issue in Vora 1.0 and 1.1. Does Vora 1.2 have the same problem?
The difference between my log and the technote is that the port is 10002, which is used by SparkSubmit.
The workaround is to stop the "Spark Thrift Server" and then start the "Vora Thriftserver".
I suspect this is a port conflict: port 10002 appears to be used by the "Spark Thrift Server" (which is started by default in IBM BigInsights and creates a ServerSocket on port 10002). Note that there is no parameter to configure the port in the Vora Thriftserver service; port 10002 is configured in the Spark service, and changing that value without stopping the "Spark Thrift Server" did not resolve the issue.
The problem is that nothing in the SAP Vora docs says we must stop the "Spark Thrift Server" in order to use the Vora Thriftserver, so I am not sure whether my workaround is right or reasonable. Please confirm whether Vora is supposed to work like that, or whether there is something wrong with our installation/configuration. Thanks!
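To double-check the suspected conflict, here is a minimal sketch (an assumption on my part, not an official step; it only needs Python on the node where the Vora Thriftserver starts, and uses port 10002 from the log above). If the bind fails with "Address already in use", another process, presumably the Spark Thrift Server, already owns the port:
import socket
# Try to bind the port the Vora Thriftserver reportedly needs (10002, per the log above).
# Failure with "Address already in use" means another process already holds it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("0.0.0.0", 10002))
    print("Port 10002 is free")
except OSError as e:
    print("Port 10002 is already in use:", e)
finally:
    s.close()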

Related

Cloud Composer worker fails to connect to external database

I am attempting to take my existing Cloud Composer environment and connect it to a remote SQL database (Azure SQL). I've been banging my head against this for a few days and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE Service and SQL Proxy workload. I then created a new Airflow connection as shown here, using the full name of the service, azure-sqlproxy-service:
I test run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Unsure of the issue, I decide to remote directly into one of the workers, whitelist that IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launch Python on the worker and attempt to connect to the database with the following:
connection = pymssql.connect(host='database.url.net',user='sa',password='password',database='database')
This gives the same error as above with both the service name and the remote IP entered as host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but the remote DB logs don't show any failed logins. With such a generic error and not many ideas on what to do next, I'm stuck. A few Google results have suggested switching libraries, but I'm not quite sure how to do that within Airflow, or whether I even need to.
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
After much pain I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as described here:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to 2.1.4 with no success. Needing to get this done, I followed the instructions outlined in this post to use pyodbc instead:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments
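For reference, a minimal pyodbc sketch along the lines of that post (the driver name "ODBC Driver 17 for SQL Server" and the connection values below are assumptions/placeholders, not taken from the environment above):
import pyodbc
# Connect to Azure SQL via pyodbc instead of pymssql.
# Driver name and credentials are placeholders; substitute your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=database.url.net,1433;"
    "DATABASE=database;"
    "UID=sa;PWD=password"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())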

Cosmos SDK remote connection refused

I am new to the Cosmos SDK and I just forked the official Cosmos SDK nameservice tutorial. It works well on my local machine, so I deployed it to a cloud server and want to access it through nscli from my local machine.
First, on my local machine I configured nscli to point the node at the remote server address:
nscli config node tcp://{{my remote server ip here}}:26657
Then I tried to run the following query
nscli query account $(nscli keys show jack -a)
Finally I got an error like ERROR: ABCIQuery: Post failed: Post "{{my remote server ip}}:26657": dial tcp : connect: connection refused
I am curious, since I don't think there is any network-related problem in my case. Did I misconfigure something?
Thank you very much!
Best,
Min
My config file was configured to listen on 127.0.0.1:26657, which should be set to 0.0.0.0:26657 instead. The connection succeeds if I start the daemon node like this: tendermint node --rpc.laddr=tcp://0.0.0.0:26657
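As a quick way to verify the fix from the local machine, here is a sketch (assuming the requests library is available; the IP placeholder is the same as above) that hits the Tendermint RPC /status endpoint once the node is bound to 0.0.0.0:
import requests
# Reachability check against the remote Tendermint RPC port.
# Replace the placeholder with the remote server IP; a 200 response means the port is reachable.
resp = requests.get("http://{{my remote server ip}}:26657/status", timeout=5)
print(resp.status_code)
print(resp.json().get("result", {}).get("node_info", {}))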

Connecting PBI to Impala

I created a Cloudera cluster (Enterprise Data Hub) on Azure. I can use DNSname:7180 to view and manage the cluster. However, I am not successful in connecting to Impala from Power BI Desktop. I tried both VM names with the dn0 and mn0 suffixes ([myhostname]-dn0.eastus2.cloudapp.azure.com) and ports 71890, 21000, and 21050, based on this and this.
It always fails at the authentication step. I tried anonymous, Windows, and DB authentication, and they all failed with this error:
Details: "ODBC: ERROR [08S01] [Microsoft][ImpalaODBC] (100) Error from the Impala Thrift API: connect() failed: errno = 10060"
Any help or clue is appreciated.
The port is 21050. You have to open it on the Azure VM, since it is not open by default.
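To test reachability of that port outside Power BI, here is a small sketch using the impyla Python client (impyla is not part of the setup above, just an assumed way to confirm the Impala daemon answers on 21050 once the Azure rule is in place; the hostname and auth settings are placeholders):
from impala.dbapi import connect
# Hypothetical connectivity check against the Impala daemon on port 21050.
# Host is the data-node name from the question; authentication options may differ per cluster.
conn = connect(host="[myhostname]-dn0.eastus2.cloudapp.azure.com", port=21050)
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print(cur.fetchall())
conn.close()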

Connection timeout when logging into vCenter through vSphere Client

I am a newbie to VMware. When I log into vCenter I get a "Connection timed out" error on the first 3 attempts; after 3 attempts I am able to log into vCenter.
I did some troubleshooting and, in vCenter, extended the client-to-server timeout to 300 seconds, but I am still facing the same issue. Can anyone please help me resolve it?
Thanks in advance
Are you connecting by IP or hostname? Make sure DNS is OK. Are you using a standalone MSSQL server? Windows Server's built-in MSSQL? Is this the VCSA (Linux appliance)?
If Windows, did you reboot everything? Slowness within vCenter can indicate a SQL issue of sorts. Check all applicable items for available resources, including disk space on all partitions.

Cannot Connect to Debian 8 VNC Server on Google Cloud Compute

I've used the following guide to set up and connect to a Debian 8 server with a GUI on a DigitalOcean server:
https://www.digitalocean.com/community/tutorials/how-to-set-up-vnc-server-on-debian-8
I know this works; however, under Azure and now Google Cloud Compute I am unable to connect. I think there must be some setting on Google's side that is blocking outside VNC connections to the Debian 8 instance.
I only have the free support level, and I don't want to upgrade just to resolve this issue alone. Here is a screenshot from my console that perhaps has some relevant information:
Console Screenshot
I'd appreciate any input anybody could give me. I tried troubleshooting this before under Azure, but after getting it to work on DigitalOcean, I know the problem isn't on my end.
The resolution was simple. I just had to allow the port tcp:5901 through Google's firewall in order to connect to my VNC server.
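As a quick sanity check after adding the firewall rule, here is a short sketch (Python; the external IP is a placeholder) to confirm tcp:5901 is reachable from outside before launching the VNC viewer:
import socket
# Placeholder address; substitute the instance's external IP.
try:
    with socket.create_connection(("EXTERNAL_IP_OF_INSTANCE", 5901), timeout=5):
        print("tcp:5901 is reachable; the VNC server should accept connections now")
except OSError as e:
    print("Still blocked:", e)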