I referred to this question, "Get VS Code Python extension to connect to Jupyter running on remote AWS EMR master node", but I have a different question.
Environment:
The Jupyter Notebook is on AWS EMR.
I access the notebook from the browser through a SOCKS5 proxy. To do so, I have to connect to the work VPN, SSH in with PuTTY using a .ppk file, and set up a tunnel (dynamic port forwarding).
To automate the step above, I am using Plink: c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D XXXX user-name@<some_IP_address>
I can successfully enable the proxy for a browser by using msedge.exe --proxy-server="socks5://<address>" or chrome.exe --proxy-server="socks5://<address>"
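For comparison (this is not part of the setup above, just a sketch): a plain local port forward over the same SSH session would expose the notebook to tools that cannot use a SOCKS proxy, assuming the notebook listens on port 8888 on the master node:
c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -L 8888:localhost:8888 user-name@<some_IP_address>
With such a forward in place, the notebook would be reachable at http://localhost:8888 without any proxy settings.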
VS Code Version:
C:\Users\ablaze>code --version
1.55.2
3c4e3df9e89829dce27b7b5c24508306b151f30d
x64
Objective:
How can I access the remote Jupyter Notebook hosted on AWS EMR behind a proxy from Visual Studio Code?
I tried and failed:
Since Visual Studio Code is built on top of Electron and benefits from the networking stack capabilities of Chromium, I used Plink as above to open the tunnel and then executed the following command on my Windows command prompt: C:\Users\ablaze>code --proxy-server="socks5://<address>" --verbose
Error message:
[17668:0505/104410.116:WARNING:dns_config_service_win.cc(692)] Failed to read DnsConfig.
[main 2021-05-05T14:44:10.224Z] Starting VS Code
[main 2021-05-05T14:44:10.224Z] from: c:\Users\ablaze\AppData\Local\Programs\Microsoft VS Code\resources\app
[main 2021-05-05T14:44:10.224Z] args: {
_: [],
...
'no-proxy-server': false,
'proxy-server': 'socks5://localhost:8088',
...
logsPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\logs\\20210505T104410'
}
...
[main 2021-05-05T14:44:10.258Z] windowsManager#open pathsToOpen [
{
backupPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\Backups\\1620224957079',
remoteAuthority: undefined
}
]
To double-check, I also opened the log file C:\Users\ablaze\AppData\Roaming\Code\logs\20210505T104410\main.log
...
[2021-05-05 10:44:40.345] [main] [trace] update#checkForUpdates, state = idle
[2021-05-05 10:44:40.345] [main] [info] update#setState checking for updates
[2021-05-05 10:44:40.345] [main] [trace] RequestService#request https://update.code.visualstudio.com/api/update/win32-x64-user/stable/3c4e3df9e89829dce27b7b5c24508306b151f30d
[2021-05-05 10:44:40.346] [main] [trace] resolveShellEnv(): skipped (Windows)
[2021-05-05 10:44:44.354] [main] [error] Error: net::ERR_PROXY_CONNECTION_FAILED
at SimpleURLLoaderWrapper.<anonymous> (electron/js2c/browser_init.js:109:6508)
at SimpleURLLoaderWrapper.emit (events.js:315:20)
[2021-05-05 10:44:44.354] [main] [info] update#setState idle
Related
I'm using the WSO2 Micro Integrator and having an issue with the CLI tool (mi): Unable to connect to host.
I run the WSO2 Micro Integrator on a VM using the command:
$micro-integrator.bat -DenableManagementApi
and I want to get information about one or more Carbon apps using:
$mi show carbonapp --verbose
[INFO] Executed ManagementCLI (mi) on Thu, 25 Jul 2019 14:59:47 +07
[INFO] Show Carbon app called
[INFO] URL: https://localhost:9165/management/applications
$mi: Unable to connect to host
mi init --verbose
[INFO] Executed ManagementCLI (mi) on Thu, 25 Jul 2019 15:01:52 +07
[INFO] Init called
Enter following parameters to configure the cli
Host name(default localhost): localhost
Port number(default 9164): 9165
CLI configuration is successful
I expect to get information about one or more Carbon apps.
Can you check which port the Management API is running on? By default it is 9164. Since you have mentioned 9165, is MI started with offset 11 (the default offset is 10)? Otherwise you may find the offset in carbon.xml.
The following log is printed to the console when MI is started with the Management API:
[2019-07-28 21:05:05,318] [micro-integrator] INFO - PassThroughListeningIOReactorManager Pass-through EI_INTERNAL_HTTPS_INBOUND_ENDPOINT Listener started on 0.0.0.0:9164
Here "9164" is the port of it. Please make sure that it is started and the port it is started is same as what you define in init.
I have SonarQube running on an AWS EC2 instance. I'm using AWS CodeBuild to build and spin up the container in AWS on the same VPC. When I try to run sonar-scanner from the project root directory, it throws the error below:
INFO: SonarQube Scanner 3.0.3.778
INFO: Java 1.8.0_121 Oracle Corporation (64-bit)
INFO: Linux 4.9.43-17.38.amzn1.x86_64 amd64
INFO: User cache: /root/.sonar/cache
ERROR: SonarQube server [http://34.253.186.73:9000/sonar] can not be reached
INFO: ------------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ------------------------------------------------------------------------
INFO: Total time: 5.415s
INFO: Final Memory: 3M/57M
INFO: ------------------------------------------------------------------------
ERROR: Error during SonarQube Scanner execution
org.sonarsource.scanner.api.internal.ScannerException: Unable to execute SonarQube
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:84)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory$1.run(IsolatedLauncherFactory.java:71)
at java.security.AccessController.doPrivileged(Native Method)
Please advise. Thanks.
The error message indicates that the scanner was not able to reach the SonarQube server (failed to establish a connection).
You can try to access http://34.253.186.73:9000/sonar with your browser while debugging the problem.
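Another quick reachability check, sketched under the assumption that curl is available in the CodeBuild environment, is to call the server's status endpoint from the same network the scanner runs in:
curl -v http://34.253.186.73:9000/sonar/api/system/status
If that call times out or is refused, look at the EC2 security group and the VPC routing between the CodeBuild container and the SonarQube instance.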
I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a problem with being unable to connect to the CA, or is there some other cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter the following commands.
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers are up and running.
2. Download the examples and install modules.
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repository
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode.
$node deploy.js
There were several problems, not the least of which is that the documentation was outdated and was written for a preview release of Hyperledger Fabric. The docs are actually in the process of being removed, as we need to update our examples / samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
One of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see that it outputs the IP address of the endpoint you should use when communicating with exposed ports.
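If you missed that output, a sketch of how to retrieve the address again (assuming the Toolbox default machine name):
docker-machine ip default
Use the printed IP in place of localhost / 127.0.0.1 in your SDK configuration.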
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js in the fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is.
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
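If you prefer to script the edit, here is a sketch for macOS (the IP 192.168.99.100 is only an example; use whatever your Quickstart Terminal printed, and note that on Linux sed -i takes no empty suffix argument):
# replace the CA endpoint with the Docker Toolbox VM address
sed -i '' 's|http://localhost:7054|http://192.168.99.100:7054|' enrollAdmin.js
# confirm the change
grep 7054 enrollAdmin.js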
I installed the Cloudera VM and started trying some basic stuff. First I just wanted to list the HDFS directories, so I issued the command below.
[cloudera@quickstart ~]$ hadoop fs -ls /
ls: Failed on local exception: java.net.SocketException: Network is unreachable; Host Details : local host is: "quickstart.cloudera/10.0.2.15"; destination host is: "quickstart.cloudera":8020;
Though ps -fu hdfs says both the namenode and datanode are running, I checked the status using the service command.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Thinking all the problems would be resolved if I restarted all the services, I executed the command below.
[cloudera@quickstart conf]$ sudo /home/cloudera/cloudera-manager --express --force
[QuickStart] Shutting down CDH services via init scripts...
[QuickStart] Disabling CDH services on boot...
[QuickStart] Starting Cloudera Manager daemons...
[QuickStart] Waiting for Cloudera Manager API...
[QuickStart] Configuring deployment...
Submitted jobs: 92
[QuickStart] Deploying client configuration...
Submitted jobs: 93
[QuickStart] Starting Cloudera Management Service...
Submitted jobs: 101
[QuickStart] Enabling Cloudera Manager daemons on boot...
Now I thought all services would be up, so I checked the status of the namenode service again. Again it came back as failed.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Next I decided to manually stop and start the namenode service. Again, not much use.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode stop
no namenode to stop
Stopped Hadoop namenode: [ OK ]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
I checked the file /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out. It just said the following:
log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA".
I also checked /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-quickstart.cloudera.log.out and found the following when I searched for errors. Can anyone please suggest the best way to get the services back on track? Unfortunately I am not able to access Cloudera Manager from the browser. Is there anything I can do from the command line?
2016-02-24 21:02:48,105 WARN com.cloudera.cmf.event.publish.EventStorePublisherWithRetry: Failed to publish event: SimpleEvent{attributes={ROLE_TYPE=[NAMENODE], CATEGORY=[LOG_MESSAGE], ROLE=[hdfs-NAMENODE], SEVERITY=[IMPORTANT], SERVICE=[hdfs], HOST_IDS=[quickstart.cloudera], SERVICE_TYPE=[HDFS], LOG_LEVEL=[WARN], HOSTS=[quickstart.cloudera], EVENTCODE=[EV_LOG_EVENT]}, content=Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!, timestamp=1456295437905} - 1 of 17 failure(s) in last 79302s
java.io.IOException: Error connecting to quickstart.cloudera/10.0.2.15:7184
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:249)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:198)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:133)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.checkSpecificRequestor(AvroEventStorePublishProxy.java:122)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.publishEvent(AvroEventStorePublishProxy.java:196)
at com.cloudera.cmf.event.publish.EventStorePublisherWithRetry$PublishEventTask.run(EventStorePublisherWithRetry.java:242)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Network is unreachable
You can try this:
Check which process is using port 7184 of the namenode (e.g. with the Linux netstat command), kill it, and then restart.
Or
Change your namenode port in the configuration and restart Hadoop.
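A sketch of the first option, assuming the standard CDH init scripts on the QuickStart VM (<PID> is a placeholder for the process id reported by netstat):
sudo netstat -tulpn | grep 7184
sudo kill <PID>
sudo service hadoop-hdfs-namenode restart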
To start, I think this issue is related to the issue in this post. However, the fix for HBase shell connectivity suggested in the comments did not work for me either, and I see no resolution.
Connecting to my Bigtable cluster from the HBase shell just hangs on any command. Example:
ubuntu:/opt/hbase-1.1.2# ./bin/hbase shell
2016-02-29 13:43:38,975 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-02-29 13:43:39,114 INFO [main] grpc.BigtableSession: Opening connection for projectId [removed], zoneId us-central1-b, clusterId [removed], on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
2016-02-29 13:43:39,191 INFO [BigtableSession-startup-0] grpc.BigtableSession: gRPC is using the JDK provider (alpn-boot jar)
2016-02-29 13:43:39,516 INFO [bigtable-connection-shared-executor-pool1-t2] io.RefreshingOAuth2CredentialsInterceptor: Refreshing the OAuth token
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2, rcc2b70cf03e3378800661ec5cab11eb43fafe0fc, Wed Aug 26 20:11:27 PDT 2015
hbase(main):001:0> list
TABLE
The shell just hangs there indefinitely and does this on any command entered.
Here are the results of the CheckConfig utility:
ubuntu:/opt/hbase-1.1.2# ./bin/hbase com.google.cloud.bigtable.hbase.CheckConfig
User Agent: bigtable-hbase-1.1-0.2.2
Project ID: [removed]
Cluster Id: [removed]
ZoneId: us-central1-b
Cluster admin host: bigtableclusteradmin.googleapis.com
Table admin host: bigtabletableadmin.googleapis.com
Data host: bigtable.googleapis.com
Attempting credential refresh...
Exception in thread "main" java.lang.IllegalAccessError: tried to access field sun.security.ssl.Handshaker.localSupportedSignAlgs from class sun.security.ssl.ClientHandshaker
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1355)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1093)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
at com.google.bigtable.repackaged.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:77)
at com.google.bigtable.repackaged.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:965)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:222)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:76)
at com.google.cloud.bigtable.hbase.CheckConfig.main(CheckConfig.java:68)
Here are the relevant versions and environment variables:
Linux ubuntu7 3.19.0-30-generic #34~14.04.1-Ubuntu SMP Fri Oct 2 22:09:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
java version "1.7.0_95"
export ALPN_VERSION=7.1.3.v20150130
export HBASE_CLASSPATH="$(pwd)/lib/bigtable/bigtable-hbase-1.1-0.2.2.jar"
export HBASE_OPTS="${HBASE_OPTS} -Xms1024m -Xmx2048m -Xbootclasspath/p:$(pwd)/lib/bigtable/alpn-boot-${ALPN_VERSION}.jar"
I'd appreciate any solutions/advice/hints at resolving this. Thanks!
You might wish to use our Quickstart for HBase shell access; it should just work. (Take a look at hbase-site.xml and hbase-env.sh after running the quickstart to see how to configure things in the future.)
The 1.7.0_95 release of Java is incompatible with alpn-boot. We are moving all of our samples to use netty-tcnative-boringssl; see the Managed-VM-GAE example for additional info.
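Since the alpn-boot jar must match the exact JRE release in use, a quick sanity check (a sketch, assuming the hbase launcher picks up the same java and HBASE_OPTS as your shell):
# confirm which JRE will be used; the alpn-boot version must correspond to this exact release
java -version
# confirm which alpn-boot jar is being prepended to the bootclasspath
echo "$HBASE_OPTS" | tr ' ' '\n' | grep alpn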