I'm working on a web application at my new company, and I get this error every time I try to use Elasticsearch (version 6.3.2):
elasticsearch.exceptions.ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f4e2ab26438>: Failed to establish a new connection: [Errno 111] Connection refused) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f4e2ab26438>: Failed to establish a new connection: [Errno 111] Connection refused)
For example, with Django, I run these commands:
sudo service elasticsearch start
then
python manage.py indexdocs
And I get the error above. I also tried:
curl -XGET http://localhost:9200
And I get this (the French message means "Connection refused"): curl: (7) Failed to connect to localhost port 9200: Connexion refusée
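For completeness, here is roughly how I check whether anything is actually listening on port 9200 (a quick sanity check; exact commands may differ by distro):
sudo service elasticsearch status
sudo netstat -tlnp | grep 9200    # no output means nothing is bound to the port
sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log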
Do you have any idea what's wrong? Do I have to authorize something somewhere?
Thank you
EDIT:
In /var/log/elasticsearch/elasticsearch.log, the same error repeats in a loop:
[2018-08-28T09:27:56,673][INFO ][o.e.n.Node ] [] initializing ...
[2018-08-28T09:27:56,733][INFO ][o.e.e.NodeEnvironment ] [DRmGsVp] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [25.5gb], net total_space [39.1gb], types [ext4]
[2018-08-28T09:27:56,734][INFO ][o.e.e.NodeEnvironment ] [DRmGsVp] heap size [1.9gb], compressed ordinary object pointers [true]
[2018-08-28T09:27:56,749][INFO ][o.e.n.Node ] [DRmGsVp] node name derived from node ID [DRmGsVpYQ8W4E4JTZoM1Lw]; set [node.name] to override
[2018-08-28T09:27:56,749][INFO ][o.e.n.Node ] [DRmGsVp] version[6.4.0], pid[6436], build[default/deb/595516e/2018-08-17T23:18:47.308994Z], OS[Linux/4.15.0-33-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_181/25.181-b13]
[2018-08-28T09:27:56,749][INFO ][o.e.n.Node ] [DRmGsVp] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.n$
[2018-08-28T09:27:57,892][ERROR][o.e.b.Bootstrap ] Exception
java.lang.IllegalArgumentException: Plugin [ingest-attachment] was built for Elasticsearch version 6.3.2 but version 6.4.0 is running
at org.elasticsearch.plugins.PluginsService.verifyCompatibility(PluginsService.java:339) ~[elasticsearch-6.4.0.jar:6.4.0]
The problem is an incompatibility between the ingest-attachment plugin and Elasticsearch: the plugin was built for 6.3.2, but 6.4.0 is running.
I suggest installing the plugin version matching 6.4.0, or removing it:
sudo bin/elasticsearch-plugin install ingest-attachment
sudo bin/elasticsearch-plugin remove ingest-attachment
This plugin can be downloaded for offline install from https://artifacts.elastic.co/downloads/elasticsearch-plugins/ingest-attachment/ingest-attachment-6.4.0.zip.
Offline install:
To install a plugin from your local file system at /path/to/plugin.zip, you could run:
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
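Putting it together, a minimal recovery sequence could look like this (the cd path assumes the Debian/Ubuntu package layout):
cd /usr/share/elasticsearch
sudo bin/elasticsearch-plugin remove ingest-attachment
sudo bin/elasticsearch-plugin install ingest-attachment    # fetches the version matching the running Elasticsearch
sudo service elasticsearch restart
curl -XGET http://localhost:9200    # should now return the cluster info JSON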
I'm using AWS and my environment is:
AMI : Deep Learning AMI GPU CUDA 11.4.1 (Ubuntu 18.04) 20211204
g4dn.xlarge instance (T4)
I installed DCGM following this link: https://developer.nvidia.com/dcgm
(I changed the version in the instructions from ubuntu20 to ubuntu18 before installing.)
After installing DCGM, when I run the command, I get the following error:
Error: unable to establish a connection to the specified host: localhost
Error: Unable to connect to host engine. Host engine connection invalid/disconnected.
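As far as I understand, dcgmi talks to a host-engine daemon (nv-hostengine), so a first check might be something like this (the service name is a guess and may differ by package version):
sudo systemctl status nvidia-dcgm    # is the host engine service running?
sudo nv-hostengine                   # or start the engine manually
dcgmi discovery -l                   # should list the T4 once the engine is reachable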
I referred to this question, but mine is different:
Get VS Code Python extension to connect to Jupyter running on remote AWS EMR master node
Environment:
The Jupyter Notebook is on AWS EMR.
I access the notebook from the browser using a SOCKS5 proxy. To do so, I have to connect to the work VPN, SSH with PuTTY using a .ppk file, and set up a tunnel (dynamic port forwarding).
To automate the step above, I am using Plink: c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D XXXX user-name@<some_IP_address>
I can successfully enable the proxy by using msedge.exe --proxy-server="socks5://<address>" or chrome.exe --proxy-server="socks5://<address>"
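A quick way to double-check the tunnel itself from cmd.exe (assuming the dynamic forward is on port 8088, as in the log below):
netstat -an | findstr :8088                                    # the SOCKS listener should show up as LISTENING
curl --socks5-hostname localhost:8088 -I http://example.com    # any response means the tunnel works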
VS Code Version:
C:\Users\ablaze>code --version
1.55.2
3c4e3df9e89829dce27b7b5c24508306b151f30d
x64
Objective:
How can I access the remote Jupyter Notebook hosted on AWS EMR behind a proxy from Visual Studio Code?
I tried and failed:
Visual Studio Code is built on top of Electron, so it benefits from the networking stack capabilities of Chromium.
I therefore used Plink as above to SSH, and then executed the following command in my Windows command prompt: C:\Users\ablaze>code --proxy-server="socks5://<address>" --verbose
Error message:
[17668:0505/104410.116:WARNING:dns_config_service_win.cc(692)] Failed to read DnsConfig.
[main 2021-05-05T14:44:10.224Z] Starting VS Code
[main 2021-05-05T14:44:10.224Z] from: c:\Users\ablaze\AppData\Local\Programs\Microsoft VS Code\resources\app
[main 2021-05-05T14:44:10.224Z] args: {
_: [],
...
'no-proxy-server': false,
'proxy-server': 'socks5://localhost:8088',
...
logsPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\logs\\20210505T104410'
}
...
[main 2021-05-05T14:44:10.258Z] windowsManager#open pathsToOpen [
{
backupPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\Backups\\1620224957079',
remoteAuthority: undefined
}
]
To double-check, I also opened the log file C:\Users\ablaze\AppData\Roaming\Code\logs\20210505T104410\main.log
...
[2021-05-05 10:44:40.345] [main] [trace] update#checkForUpdates, state = idle
[2021-05-05 10:44:40.345] [main] [info] update#setState checking for updates
[2021-05-05 10:44:40.345] [main] [trace] RequestService#request https://update.code.visualstudio.com/api/update/win32-x64-user/stable/3c4e3df9e89829dce27b7b5c24508306b151f30d
[2021-05-05 10:44:40.346] [main] [trace] resolveShellEnv(): skipped (Windows)
[2021-05-05 10:44:44.354] [main] [error] Error: net::ERR_PROXY_CONNECTION_FAILED
at SimpleURLLoaderWrapper.<anonymous> (electron/js2c/browser_init.js:109:6508)
at SimpleURLLoaderWrapper.emit (events.js:315:20)
[2021-05-05 10:44:44.354] [main] [info] update#setState idle
I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a problem with being unable to connect to the CA, or is there another cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter the commands.
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers were up and running.
2. Download examples and install modules.
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repository
$npm install --global windows-build-tools
$npm install
3. Try to deploy chaincode.
$node deploy.js
There were several problems, not the least of which is that the documentation was outdated and written for a preview release of Hyperledger Fabric. Those docs are actually in the process of being removed, as we need to update our examples / samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see that it outputs the IP address of the endpoint you should use when communicating with exposed ports.
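For example, to find the address to use instead of localhost (assuming the default machine name, "default"):
docker-machine ip default                         # prints the VM's IP, e.g. 192.168.99.100
curl http://$(docker-machine ip default):8054/    # rough reachability check against the CA port from the question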
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js in the fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is (see the one-liner after these steps)
Saved the file
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
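If you prefer a one-liner over editing by hand, something like this does the same substitution (assuming Docker Toolbox reported the usual default IP, 192.168.99.100):
sed -i.bak "s|http://localhost:7054|http://192.168.99.100:7054|" enrollAdmin.js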
I'm installing a recent GoCD (16.7) on an Ubuntu machine with openjdk-8 (JRE and JDK). The agents (on localhost) fail to connect to the server:
[Sat Jul 30 05:58:47 UTC 2016] Starting Go Agent Bootstrapper with command:
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
-jar /usr/share/go-agent3/agent-bootstrapper.jar
-serverUrl https://127.0.0.1:8154/go/
...
java.lang.Exception: Couldn't access Go Server with base url:
https://127.0.0.1:8154/go/admin/agent-launcher.jar:
java.net.SocketException: Broken pipe
at com.thoughtworks.go.agent.launcher.ServerCall.invoke(ServerCall.java:78)
and
2016-07-30 06:00:48,790 [main ] ERROR go.agent.launcher.ServerBinaryDownloader:118
- Couldn't update admin/agent-launcher.jar. Sleeping for 1m.
Error: java.lang.Exception: Couldn't access Go Server with base url:
https://127.0.0.1:8154/go/admin/agent-launcher.jar:
java.net.SocketException: Broken pipe
(I manually wrapped those lines for readability)
The server is actually accessible. For instance:
$ curl --silent --insecure https://127.0.0.1:8154/go/ | head -2
<!-- *************************GO-LICENSE-START******************************
* Copyright 2014 ThoughtWorks, Inc.
Yes, I'm using --insecure, but GoCD ships with a self-signed cert; that's standard practice. Some things I've read suggest "you are blocking your port", but this is on localhost.
Are your GoCD server and agent using identical versions of Java? We have found they must be the same, because the certificates have to match. See chatter.
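A rough way to compare the two is to find which java binary each process launched with and print its version (adjust the grep patterns to your install):
ps -ef | grep go-server | grep -o '/[^ ]*/bin/java'
ps -ef | grep go-agent | grep -o '/[^ ]*/bin/java'
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -version    # run this for each path found above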
I installed the Cloudera VM and started trying some basic stuff. First I just wanted to ls the HDFS directories, so I issued the command below.
[cloudera@quickstart ~]$ hadoop fs -ls /
ls: Failed on local exception: java.net.SocketException: Network is unreachable; Host Details : local host is: "quickstart.cloudera/10.0.2.15"; destination host is: "quickstart.cloudera":8020;
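To rule out a pure connectivity problem, I also probed the NameNode RPC port from inside the VM (the host and port come from the error above):
telnet quickstart.cloudera 8020              # or, if telnet isn't installed:
curl -v telnet://quickstart.cloudera:8020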
though ps -fu hdfs says both the namenode and the datanode are running. I checked the status using the service command.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Thinking all the problems would be resolved if I restarted all the services, I executed the command below.
[cloudera@quickstart conf]$ sudo /home/cloudera/cloudera-manager --express --force
[QuickStart] Shutting down CDH services via init scripts...
[QuickStart] Disabling CDH services on boot...
[QuickStart] Starting Cloudera Manager daemons...
[QuickStart] Waiting for Cloudera Manager API...
[QuickStart] Configuring deployment...
Submitted jobs: 92
[QuickStart] Deploying client configuration...
Submitted jobs: 93
[QuickStart] Starting Cloudera Management Service...
Submitted jobs: 101
[QuickStart] Enabling Cloudera Manager daemons on boot...
Now I thought all the services would be up, so I again checked the status of the namenode service. Again it came back failed.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Now I decided to manually stop and start the namenode service. Again, not much use.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode stop
no namenode to stop
Stopped Hadoop namenode: [ OK ]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
I checked the file /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out. It just said the following:
log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA".
I also checked /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-quickstart.cloudera.log.out and found the following when I searched for errors. Can anyone suggest the best way to get the services back on track? Unfortunately, I am not able to access Cloudera Manager from the browser. Is there anything I can do from the command line?
2016-02-24 21:02:48,105 WARN com.cloudera.cmf.event.publish.EventStorePublisherWithRetry: Failed to publish event: SimpleEvent{attributes={ROLE_TYPE=[NAMENODE], CATEGORY=[LOG_MESSAGE], ROLE=[hdfs-NAMENODE], SEVERITY=[IMPORTANT], SERVICE=[hdfs], HOST_IDS=[quickstart.cloudera], SERVICE_TYPE=[HDFS], LOG_LEVEL=[WARN], HOSTS=[quickstart.cloudera], EVENTCODE=[EV_LOG_EVENT]}, content=Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!, timestamp=1456295437905} - 1 of 17 failure(s) in last 79302s
java.io.IOException: Error connecting to quickstart.cloudera/10.0.2.15:7184
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:249)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:198)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:133)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.checkSpecificRequestor(AvroEventStorePublishProxy.java:122)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.publishEvent(AvroEventStorePublishProxy.java:196)
at com.cloudera.cmf.event.publish.EventStorePublisherWithRetry$PublishEventTask.run(EventStorePublisherWithRetry.java:242)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Network is unreachable
You can try this:
check which process is using port 7184 (e.g., with the Linux netstat command), kill it, and then restart; see the sketch below
Or
change your namenode port in the configuration and restart Hadoop
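A rough sketch of the first option (the PID and exact service names will differ on your system):
sudo netstat -tlnp | grep :7184              # shows the PID bound to the port
sudo kill <pid>                              # the PID reported above
sudo service hadoop-hdfs-namenode restart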