I'm struggling to get iscsiadm to connect from the iSCSI Initiator VM (using VirtualBox) to my iSCSI Target VM (also on VirtualBox).
E.g.
iscsiadm --mode discovery --type sendtargets --portal <ip address> --discover
iscsiadm: cannot make connection to <ip address>: Connection refused
There is a Host-only adapter network set up and I can SSH between the two VMs.
I disabled iptables to check whether it was a firewall problem, but I was still getting the same error.
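One quick way to separate a firewall or network problem from a "nothing is listening" problem is to test raw TCP reachability of the iSCSI port from the Initiator VM; a minimal sketch, assuming netcat is installed:
nc -zv <ip address> 3260
# "Connection refused" here means the packet reaches the Target VM but nothing is accepting on 3260;
# a timeout or "no route to host" would instead point back at the network or a firewall.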
Fwiw, on the Target machine I have:
# tgtadm --mode target --op show
Target 1: iqn.2014-03.my.target.server:tgt1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 55 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/vg_iscsi/lv_iscsi_1
Backing store flags:
Account information:
ACL information:
ALL
Any suggestions what else I can try?
Your client machine may have another session already logged into that target.
First you will have to log out of that target from your client, then discover using your new target name.
Log out (change the parameters accordingly):
iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --logout
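After the logout, discovery and login can be re-run; a sketch reusing the example IQN and portal above (substitute your own values):
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.1:3260
iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --login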
If your client still does not discover the target, use 0.0.0.0:3260 as the portal for this particular IQN on your server machine. This should work fine.
server:
o- portals .................................................................... [Portals: 1]
| | o- 0.0.0.0:3260 ..................................................................... [OK]
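The portal listing above is targetcli/LIO-style output. If that is the target stack in use, a wildcard portal can be added roughly like this (a sketch; the IQN shown is the one from the question, and any existing address-specific portal should be removed first; with tgt/tgtadm there is no per-target portal and tgtd already binds 0.0.0.0:3260 by default):
targetcli /iscsi/iqn.2014-03.my.target.server:tgt1/tpg1/portals create 0.0.0.0 3260
targetcli saveconfig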
This is not a firewall issue.
me@here:~$ sudo iscsiadm -m discovery -t st -p 192.168.22.240
iscsiadm: cannot make connection to 192.168.22.240: Connection refused
iscsiadm: cannot make connection to 192.168.22.240: Connection refused
iscsiadm: cannot make connection to 192.168.22.240: Connection refused
^Ciscsiadm: caught SIGINT, exiting...
me@here:~$ sudo iscsiadm -m discoverydb -t st -p 192.168.22.240
# BEGIN RECORD 2.0-873
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.22.240
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discov...
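For what it's worth, "Connection refused" at discovery time almost always means nothing is accepting connections on port 3260 at that address, so it is worth checking the target host directly; a sketch, assuming a tgt-based target (the daemon name differs between implementations):
sudo netstat -tlnp | grep 3260      # is anything bound to 3260, and on which address?
sudo systemctl status tgtd          # or: sudo service tgtd status on older init systems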
Related
I have an AWS Elastic Beanstalk environment and application running the "64bit Amazon Linux 2 v3.0.2 running Docker" solution stack in a standard way. (I followed the AWS documentation.)
I have deployed a Dockerrun.aws.json file to it, and found that it has DNS issues.
To troubleshoot, I SSHed into the EC2 instance where it is running and found that on this instance an nslookup of any of the hostnames in question runs fine.
But running nslookup from within Docker, such as with:
sudo docker run busybox nslookup www.google.com
yields the result:
*** Can't find www.google.com: No answer
Adding the typical fixes, such as passing the --dns x.x.x.x or --network=host arguments, does not fix the issue; it still times out contacting DNS.
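To narrow it down further, it may help to compare the resolver the container is actually handed against an explicitly specified public resolver; a sketch using the same busybox image:
sudo docker run --rm busybox cat /etc/resolv.conf
sudo docker run --rm --dns 8.8.8.8 busybox nslookup www.google.com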
Any thoughts as to what the issue might be? Here is the docker info output:
$ sudo docker info
Client:
Debug Mode: false
Server:
Containers: 30
Running: 1
Paused: 0
Stopped: 29
Images: 4
Server Version: 19.03.6-ce
Storage Driver: devicemapper
Pool Name: docker-docker--pool
Pool Blocksize: 524.3kB
Base Device Size: 107.4GB
Backing Filesystem: ext4
Udev Sync Supported: true
Data Space Used: 5.403GB
Data Space Total: 12.72GB
Data Space Available: 7.314GB
Metadata Space Used: 3.965MB
Metadata Space Total: 16.78MB
Metadata Space Available: 12.81MB
Thin Pool Minimum Free Space: 1.271GB
Deferred Removal Enabled: true
Deferred Deletion Enabled: true
Deferred Deleted Device Count: 0
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: ff48f57fc83a8c44cf4ad5d672424a98ba37ded6
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.181-108.257.amzn1.x86_64
Operating System: Amazon Linux AMI 2018.03
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.79GiB
Name:
ID:
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
Live Restore Enabled: false
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
Thank you
I want to synchronize with the test network. I create a new account with the following command:
sudo geth --datadir /data/ethereum/test --keystore /data/ethereum/test/keystore account new
Next, I run:
sudo geth --testnet --syncmode "fast" --datadir /data/ethereum/test --keystore /data/ethereum/test/keystore --maxpeers 20 --cache=1024 --rpc --rpcapi "db,eth,net,web3,personal" --rpcport=8545 --rpcaddr "0.0.0.0" --rpccorsdomain "*" console
An error occurs:
WARN [12-06|11:29:30.933] Failed account unlock attempt address=0x06C55Ac0d9C14348D5b63FC693e134889340ecBa err="cannot decrypt key with given passphrase"
I am entering the password 100% correctly.
Ports are open
tcp: 80, 8080, 443, 30000 - 30999, 8545.
udp: 30000 - 30999.
I connect to the server using PuTTY. The server is ubuntu-bionic-18.04-amd64. If I delete all the accounts and create them anew, the error still occurs. If the network is synchronized and I create a new account, the error still occurs.
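One thing worth verifying is that geth is actually reading the keystore the account was created in; listing the accounts with the same flags is a quick check (a sketch using the paths from the commands above):
sudo geth account list --datadir /data/ethereum/test --keystore /data/ethereum/test/keystore
# the address being unlocked (0x06C55Ac0...) should appear in this list;
# if it does not, the unlock attempt is hitting a different keystore than the one that holds the key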
After installing neo4j on my aws ec2 instance, the following seems to indicate that the server is up.
# bin/neo4j console
Active database: graph.db
Directories in use:
home: /usr/local/share/neo4j-community-3.3.1
config: /usr/local/share/neo4j-community-3.3.1/conf
logs: /usr/local/share/neo4j-community-3.3.1/logs
plugins: /usr/local/share/neo4j-community-3.3.1/plugins
import: /usr/local/share/neo4j-community-3.3.1/import
data: /usr/local/share/neo4j-community-3.3.1/data
certificates: /usr/local/share/neo4j-community-3.3.1/certificates
run: /usr/local/share/neo4j-community-3.3.1/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended.
See the Neo4j manual.
2017-12-01 16:03:04.380+0000 INFO ======== Neo4j 3.3.1 ========
2017-12-01 16:03:04.447+0000 INFO Starting...
2017-12-01 16:03:05.986+0000 INFO Bolt enabled on 127.0.0.1:7687.
2017-12-01 16:03:11.206+0000 INFO Started.
2017-12-01 16:03:12.860+0000 INFO Remote interface available at http://localhost:7474/
At this point I am not able to connect. I have opened up ports 7474 and 7687, and I can access port 80 and SSH into the instance, etc.
Is this a neo4j or aws problem?
Any help is appreciated.
Colin Goldberg
Set dbms.connectors.default_listen_address to 0.0.0.0, then open only the SSL port on 7473 using Amazon's EC2 security groups. Don't use 7474 if you don't have to.
It looks like Neo4j is only listening on the localhost interface. If you run netstat -a | grep 7474 you want to see something like *:7474. If you see something like localhost:7474 then you won't be able to connect to the port from outside.
Take a look at Configuring Neo4j connectors. I believe you want dbms.connectors.default_listen_address set to 0.0.0.0.
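In this install that means editing conf/neo4j.conf under the home directory shown above and then restarting Neo4j (for a console run, just stop and start it again); a minimal sketch:
# /usr/local/share/neo4j-community-3.3.1/conf/neo4j.conf
dbms.connectors.default_listen_address=0.0.0.0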
And now a warning - you are opening your Neo4j to the entire planet if you do this. That may be ok but it seems unlikely that this is what you want to do. The defaults are there for a reason - you don't want the entire planet being able to try to hack into your database. Use caution if you enable this.
I have just set up a four-node Cassandra 3.4 cluster running CentOS 7 on AWS. I was able to configure it and get all the nodes joined together. Now I would like to perform some tests, monitoring the cluster behavior using OpsCenter, which I installed on one machine.
I thought of using SSH tunneling to access it from my computer:
ssh -i Amazon-EC2-Ami.pem -L 9999:localhost:8888 centos@public_address
Using my browser, localhost:9999 gets correctly tunneled to the OpsCenter login page, http://localhost:8888/opscenter/login.html, but I get an ERR_CONNECTION_REFUSED.
I tried accessing OpsCenter on that machine using a command-line browser and it displays the login page. I really do not know what the issue could be. Any information is truly appreciated. This is the cassandra.yaml configuration file, in case it helps:
cluster_name: 'Cloak'
listen_address:
endpoint_snitch: GossipingPropertyFileSnitch
rpc_address:
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "172.31.55.186,172.31.55.187"
EDIT
Using the -v option when launching the SSH tunnel, I can confirm that the requests are correctly forwarded:
[centos@ip-172-31-55-186 ~]$ debug1: Connection to port 9999 forwarding to localhost port 8888 requested.
debug1: channel 3: new [direct-tcpip]
debug1: Connection to port 9999 forwarding to localhost port 8888 requested.
debug1: channel 4: new [direct-tcpip]
debug1: channel 3: free: direct-tcpip: listening port 9999 for localhost port 8888, connect from 127.0.0.1 port 43846 to 127.0.0.1 port 9999, nchannels 5
Finally I managed to access it from my computer. I had to modify the configuration file for OpsCenter, located at /etc/opscenter/opscenterd.conf (package installations only):
[webserver]
port = 8888
interface = 127.0.0.1
By default the web server accepts requests only from localhost. It is probably not the best option, but since OpsCenter allows you to configure users, I set interface = 0.0.0.0, allowing any host to contact it.
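The resulting stanza, followed by a service restart, would look roughly like this (a sketch; the service name for the package install is assumed to be opscenterd):
[webserver]
port = 8888
interface = 0.0.0.0

sudo service opscenterd restart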
I have just created an EC2 instance on a brand new AWS account, behind a security group, and loaded some software on it. I am running Sinatra on the machine on port 4567 (currently), and have opened that port in my security group to the whole world. Further, I am able to SSH into the EC2 instance, but I cannot connect on port 4567. I am using the public IP to connect:
shakuras:~ tyler$ curl **.***.**.***:22
SSH-2.0-OpenSSH_6.2p2 Ubuntu-6ubuntu0.1
curl: (56) Recv failure: Connection reset by peer
shakuras:~ tyler$ curl **.***.**.***:4567
curl: (7) Failed connect to **.***.**.***:4567; Connection refused
But my webserver is running, since I can see the site when I curl from localhost:
ubuntu@ip-172-31-8-160:~$ curl localhost:4567
Hello world! Welcome
I thought it might be the firewall but I ran iptables and got:
ubuntu@ip-172-31-8-160:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I'm pretty lost on what is going on here. Why can't I connect from the outside world?
Are you sure that the web server is listening on interfaces other than localhost?
Check the output of
netstat -an | grep 4567
If it isn't listening on 0.0.0.0 then that is the cause.
This sounds like an issue with the Sinatra binding. There are several existing posts that talk about binding Sinatra to all IP addresses.
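For a classic single-file Sinatra app started directly with ruby, the quickest way to bind to all interfaces is from the command line; a sketch, where app.rb stands in for the actual file:
ruby app.rb -o 0.0.0.0 -p 4567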
Based on your netstat command, you are listening on 127.0.0.1. The output should look something like this:
tcp 0 0 :::8080 :::* LISTEN
Can you post your Sinatra configs? What are you using to start it?
This does not work on a plain Amazon AMI, with the installation done as shown in http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-install.html
Steps 1, 2 and 3 (agent installation and starting the daemon) work as shown, but then:
[ec2-user@ip-<ip> ~]$ curl http://localhost:51678/v1/metadata
curl: (7) Failed to connect to localhost port 51678: Connection refused
In fact, netstat shows some listening TCP ports, but none that I am able to connect to, and definitely not TCP 51678.
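On the Amazon Linux 1 AMI the agent runs as an upstart job named ecs, so it is worth confirming that the agent container is actually up before testing the introspection port; a sketch, assuming that AMI generation:
sudo status ecs                           # upstart job state
sudo docker ps --filter name=ecs-agent    # is the ecs-agent container running?
sudo start ecs                            # start it if it is not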
If you're using Amazon EC2, have made sure you have a Custom TCP security rule for 0.0.0.0 in your security groups, and still can't connect, try adding 0.0.0.0 to the first line of /etc/hosts:
sudo nvim /etc/hosts
Add a space after the last entry on the first line and append 0.0.0.0, so it looks like:
127.0.0.1 localhost 0.0.0.0