Google Cloud Bigtable HBase shell connectivity hangs - google-cloud-platform

To start, I think this issue is related to the one described in this post. However, the fix for HBase shell connectivity suggested in the comments did not work for me either, and I see no resolution.
Connecting to my Bigtable cluster from the HBase shell just hangs on any command. Example:
ubuntu:/opt/hbase-1.1.2# ./bin/hbase shell
2016-02-29 13:43:38,975 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-02-29 13:43:39,114 INFO [main] grpc.BigtableSession: Opening connection for projectId [removed], zoneId us-central1-b, clusterId [removed], on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-29 13:43:39,191 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-29 13:43:39,516 INFO [bigtable-connection-shared-executor-pool1-t2] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015
hbase(main):001:0> list
TABLE
The shell just hangs there indefinitely and does this on any command entered.
Here are the results of the CheckConfig utility:
ubuntu:/opt/hbase-1.1.2# ./bin/hbase com.google.cloud.bigtable.hbase.CheckConfig
User Agent: bigtable-hbase-1.1-0.2.2
Project ID: [removed]
Cluster Id: [removed]
ZoneId: us-central1-b
Cluster admin host: bigtableclusteradmin.googleapis.com
Table admin host: bigtabletableadmin.googleapis.com
Data host: bigtable.googleapis.com
Attempting credential refresh...
Exception in thread "main" java.lang.IllegalAccessError: tried to access field sun.security.ssl.Handshaker.localSupportedSignAlgs from class sun.security.ssl.ClientHandshaker
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1093)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
at com.google.bigtable.repackaged.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:77)
at com.google.bigtable.repackaged.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:965)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:222)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:76)
at com.google.cloud.bigtable.hbase.CheckConfig.main(CheckConfig.java:68)
Here are the relevant versions and environment variables:
Linux ubuntu7 3.19.0-30-generic #34~14.04.1-Ubuntu SMP Fri Oct 2 22:09:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
java version "1.7.0_95"
export ALPN_VERSION=7.1.3.v20150130
export HBASE_CLASSPATH="$(pwd)/lib/bigtable/bigtable-hbase-1.1-0.2.2.jar"
export HBASE_OPTS="${HBASE_OPTS} -Xms1024m -Xmx2048m -Xbootclasspath/p:$(pwd)/lib/bigtable/alpn-boot-${ALPN_VERSION}.jar"
I'd appreciate any solutions/advice/hints at resolving this. Thanks!

You might wish to use our Quickstart for HBase shell access; it should just work. (Take a look at hbase-site.xml and hbase-env.sh after running the quickstart to see how to configure things in the future.)
The 1.7.0_95 release of Java is incompatible with alpn-boot. We are moving all of our samples to use netty-tcnative-boringssl; see the Managed-VM-GAE example for additional info.
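If you need to stay on the alpn-boot approach for the moment, the boot jar has to match your exact JRE update. A minimal sketch of the adjusted environment from the question, assuming 7.1.3.v20160121 is the alpn-boot build published for 1.7.0_95 (verify against the Jetty ALPN version table for your JRE before relying on it):
# alpn-boot must correspond to the exact JDK update in use
export ALPN_VERSION=7.1.3.v20160121
export HBASE_CLASSPATH="$(pwd)/lib/bigtable/bigtable-hbase-1.1-0.2.2.jar"
export HBASE_OPTS="${HBASE_OPTS} -Xms1024m -Xmx2048m -Xbootclasspath/p:$(pwd)/lib/bigtable/alpn-boot-${ALPN_VERSION}.jar"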

Related

VS Code use Jupyter Notebook behind proxy

I referred to this question, but I have a different question:
Get VS Code Python extension to connect to Jupyter running on remote AWS EMR master node
Environment:
The Jupyter Notebook is on AWS EMR.
I access the notebook from the browser using a SOCKS5 proxy. To do so, I have to connect to the work VPN, SSH in using PuTTY with a .ppk file, and set up a tunnel (dynamic port forwarding).
To automate the step above, I am using PLink: c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D XXXX user-name@<some_IP_address>
I can successfully enable proxy by using msedge.exe --proxy-server="socks5://<address>" or chrome.exe --proxy-server="socks5://<address>"
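Note that the address passed to --proxy-server has to be the local host and port that PLink's -D option is actually listening on. A minimal sketch of a matching pair, where 8088 is only a placeholder port and the paths are the ones from above:
rem open a SOCKS proxy on local port 8088, then point the client at it
c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D 8088 user-name@<some_IP_address>
msedge.exe --proxy-server="socks5://localhost:8088"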
VS Code Version:
C:\Users\ablaze>code --version
1.55.2
3c4e3df9e89829dce27b7b5c24508306b151f30d
x64
Objective:
How can I access the remote Jupyter Notebook hosted on AWS EMR behind a proxy from Visual Studio Code?
I tried and failed:
Since Visual Studio Code is built on top of Electron, it benefits from all the networking stack capabilities of Chromium.
I used Plink like above to SSH and then executed the following command on my windows command prompt: C:\Users\ablaze>code --proxy-server="socks5://<address>" --verbose
Error message:
[17668:0505/104410.116:WARNING:dns_config_service_win.cc(692)] Failed to read DnsConfig.
[main 2021-05-05T14:44:10.224Z] Starting VS Code
[main 2021-05-05T14:44:10.224Z] from: c:\Users\ablaze\AppData\Local\Programs\Microsoft VS Code\resources\app
[main 2021-05-05T14:44:10.224Z] args: {
_: [],
...
'no-proxy-server': false,
'proxy-server': 'socks5://localhost:8088',
...
logsPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\logs\\20210505T104410'
}
...
[main 2021-05-05T14:44:10.258Z] windowsManager#open pathsToOpen [
{
backupPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\Backups\\1620224957079',
remoteAuthority: undefined
}
]
To double check I also opened the log file C:\Users\ablaze\AppData\Roaming\Code\logs\20210505T104410\main.log
...
[2021-05-05 10:44:40.345] [main] [trace] update#checkForUpdates, state = idle
[2021-05-05 10:44:40.345] [main] [info] update#setState checking for updates
[2021-05-05 10:44:40.345] [main] [trace] RequestService#request https://update.code.visualstudio.com/api/update/win32-x64-user/stable/3c4e3df9e89829dce27b7b5c24508306b151f30d
[2021-05-05 10:44:40.346] [main] [trace] resolveShellEnv(): skipped (Windows)
[2021-05-05 10:44:44.354] [main] [error] Error: net::ERR_PROXY_CONNECTION_FAILED
at SimpleURLLoaderWrapper.<anonymous> (electron/js2c/browser_init.js:109:6508)
at SimpleURLLoaderWrapper.emit (events.js:315:20)
[2021-05-05 10:44:44.354] [main] [info] update#setState idle

WSO2 CLI tools Unable to connect to host

I'm using the WSO2 Micro Integrator and having an issue with the CLI tools: mi: Unable to connect to host
I run the WSO2 Micro Integrator on a VM using the command:
$micro-integrator.bat -DenableManagementApi
and I want to get information about one or more Carbon Apps using:
$mi show carbonapp --verbose
[INFO] Executed ManagementCLI (mi) on Thu, 25 Jul 2019 14:59:47 +07
[INFO] Show Carbon app called
[INFO] URL: https://localhost:9165/management/applications
$mi: Unable to connect to host
mi init --verbose
[INFO] Executed ManagementCLI (mi) on Thu, 25 Jul 2019 15:01:52 +07
[INFO] Init called
Enter following parameters to configure the cli
Host name(default localhost): localhost
Port number(default 9164): 9165
CLI configuration is successful
I expect to get information about one or more Carbon Apps.
Can you check which port the Management API is running on? By default it is 9164; since you have mentioned 9165, is MI started with offset 11 (the default offset is 10)? Otherwise, you may find the offset in carbon.xml.
The following log line is printed in the console when MI is started with the Management API:
[2019-07-28 21:05:05,318] [micro-integrator] INFO - PassThroughListeningIOReactorManager Pass-through EI_INTERNAL_HTTPS_INBOUND_ENDPOINT Listener started on 0.0.0.0:9164
Here "9164" is its port. Please make sure that the Management API is started and that the port it is started on is the same as what you define in init.
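If the Management API does turn out to be listening on the default port, re-running the CLI configuration against 9164 should be enough. A minimal sketch reusing the commands from the question, assuming no port offset is configured:
mi init
Host name(default localhost): localhost
Port number(default 9164): 9164
mi show carbonapp --verbose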

Docker logs driver gcplogs: How to format logs so that they're shown correctly in Stackdriver?

tldr:
- I'm writing logs from a python application that's running outside of Google Cloud
- I'm using gcplogs to import logs to Stackdriver
- The imported logs must be in the wrong format, because they're not displayed as they should be in Stackdriver Logging
I'm having a hard time figuring out how to configure the gcplogs Docker logs driver so that logs appear as they should in Stackdriver logging. Here's what the logs currently look like:
It looks like Stackdriver isn't parsing these logs correctly. However, if the Docker image is run in GKE, the logs look correct:
Here Stackdriver has recognized that these logs were debug messages.
Since these two applications are using the same logger, I think the message that is logged from the application is in the right format, and that it must be the gcplogs configuration that's wrong or missing something. (See this repository for what the python code looks like)
Here's the content of /etc/docker/daemon.json on the machine that's running the Docker image:
{
  "log-driver": "gcplogs",
  "log-opts": {
    "gcp-project": "removed",
    "env": "host"
  }
}
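For what it's worth, the gcplogs driver can also be enabled per container instead of daemon-wide, which makes it easier to experiment with the log-opts. A minimal sketch, where my-app is a placeholder image name:
docker run --log-driver=gcplogs \
  --log-opt gcp-project=removed \
  --log-opt env=host \
  my-app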
The output of docker info | grep 'Logging Driver' is Logging Driver: gcplogs.
The output of docker version is:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.2
Git commit: 0520e24
Built: Wed Mar 21 23:05:52 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:14:54 2018
OS/Arch: linux/amd64
Experimental: false
Any tips on how to configure this so that the logs end up looking as they should in Stackdriver would be much appreciated.

H2O + HDFS (Cloudera)

We have a Cloudera cluster up and running with an H2O instance, although it appears to be running off h2o.jar (which, as I understand it, is the standalone H2O; please correct me if that's incorrect). I can connect, but it will not load any files from our HDFS. (All of this I can see via 'ps' on the edge node.)
So I started an instance with h2odriver.jar
java -jar /path/to/h2odriver.jar -nodes 2 -mapperXmx 5g -output /my/hdfs/dir
I get several output/callback addresses:
[Possible callback IP address: 10.96.243.46:33728]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 10.96.243.46:33728
So I fire up python and try and connect (same thing happens if I use 10.96.243.46):
>>> h2o.connect(ip='127.0.0.1', port=33728)
and get
Connecting to H2O server at http://127.0.0.1:33728..... failed.
H2OConnectionError: Could not establish link to the H2O cloud http://127.0.0.1:33728 after 5 retries
...
Failed to establish a new connection: [Errno 111] Connection refused
The thing is, on my screen with the H2O jar/java job I can see:
MapperToDriverMessage: Read invalid type (G) from socket, ignoring...
MapperToDriverMessage: read: Unknown Type
I cannot figure out how to launch H2O in cluster mode and have it access our HDFS system, or even connect. I can connect to the h2o.jar version, but that sees no HDFS (it can see the filesystem of the edge node). What is the proper way to launch H2O so that it can see the attached HDFS system? (We are running Cloudera 5.7 in an enterprise environment, Python is 3.6, H2O is 3.10.0.6, and I know we have a ton of firewalls/security; I believe we are set up through LDAP.)
You are correct that h2o.jar is the standalone version of H2O, which is not meant for connecting to HDFS.
Using the appropriate h2odriver.jar for your particular hadoop distribution is the way to go.
The correct beginner instructions can be found here:
go to http://www.h2o.ai/download/
choose H2O "Latest Stable Release"
choose tab "Install on Hadoop"
It says to run the following command:
hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName
[ Note this is "hadoop jar", not "java -jar" as written in the question. ]
You should see output like this:
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 172.16.2.181]
[Possible callback IP address: 127.0.0.1]
...
Waiting for H2O cluster to come up...
H2O node 172.16.2.188:54321 requested flatfile
Sending flatfiles to nodes...
[Sending flatfile to node 172.16.2.188:54321]
H2O node 172.16.2.188:54321 reports H2O cluster size 1
H2O cluster (1 nodes) is up
(Note: Use the -disown option to exit the driver after cluster formation)
Open H2O Flow in your web browser: http://172.16.2.188:54321
(Press Ctrl-C to kill the cluster)
Blocking until the H2O cluster shuts down...
Then point your web browser to the place where it says to "Open H2O Flow in your web browser".
(The other addresses in the output are diagnostics, and not for end users.)
In this case, the python connection command would be:
h2o.connect(ip = '172.16.2.188', port = 54321)
I recommend going to Flow in a web browser, starting to import a file by typing in "hdfs://", and seeing if autocompletion works. If it does, your HDFS connection is working.
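The same check can be done from Python once the h2o.connect call above succeeds; a minimal sketch, where the hdfs:// path is a placeholder to replace with a real file on your cluster:
>>> import h2o
>>> h2o.connect(ip='172.16.2.188', port=54321)
>>> # if the H2O nodes can reach HDFS, this parses the file into an H2OFrame
>>> h2o.import_file('hdfs://<namenode>/path/to/file.csv')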

SSH Connection disconnected

I'm a student from Korea.
First, I'm sorry about my poor English :)
I'm making a web service using AWS + nginx + Django.
I connect to the AWS instance (Ubuntu) using the SSH protocol:
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-74-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Sat Apr 30 07:03:51 UTC 2016
System load: 0.0 Processes: 105
Usage of /: 23.8% of 7.74GB Users logged in: 0
Memory usage: 14% IP address for eth0: 172.31.17.137
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
21 packages can be updated.
17 updates are security updates.
Last login: Sat Apr 30 07:03:52 2016 from 210.103.124.253
pyenv-virtualenv: no virtualenv has been activated.
and then I run:
python manage.py runserver --settings=abc.settings.production
So everyone can access my web service!
but after about 30 minutes
the SSH connection breaks by itself
and prints this message:
packet_write_wait: Connection to 52.69.xxx.xxx: Broken pipe
and nobody can access my web service...
So my web site can't be accessed when my computer is powered off and there is no SSH connection...
I want everyone to be able to access my web service 24/7.
Please give me a method. Thank you :)
When you want to run a command that continues after your current shell terminates, you should use the nohup command to launch it.
That causes the process to be detached from its initial parent shell so it is not killed when the parent terminates.
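For the setup described in the question, a minimal sketch would be the following (the 0.0.0.0:8000 binding is an assumption; use whatever address nginx actually proxies to):
# keeps the dev server running after the SSH session closes; output goes to nohup.out
nohup python manage.py runserver 0.0.0.0:8000 --settings=abc.settings.production &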