I am trying to copy local files to my EC2 instance.
When I run this command:
scp -i keypair.pem process.py ubuntu@ip-xx-xxx-xx-xxx.compute-1.amazonaws.com:~/.
I get this error:
ssh: Could not resolve hostname ip-xx-xxx-xx-xxx.compute-1.amazonaws.com: nodename nor servname provided, or not known
lost connection
When I run this command:
scp -i keypair.pem process.py ubuntu@ip-xx-xxx-xx-xxx:~/.
It stalls for about a minute, then I get this error:
ssh: connect to host ip-xx-xxx-xx-xxx port 22: Operation timed out
lost connection
Any ideas how to resolve this?
The easiest way to use scp is to start with an SSH command that already works:
ssh -i keypair.pem ec2-user@1.2.3.4
Then, modify it to use scp:
scp -i keypair.pem foo.txt ec2-user@1.2.3.4:/tmp/
The only things changed were:
ssh becomes scp
Insert source filename
Append :/target/
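Note that ip-xx-xxx-xx-xxx.compute-1.amazonaws.com is the instance's private DNS name, which only resolves from inside the VPC; from your local machine you need the public DNS name or public IP. A minimal sketch for looking it up, assuming the AWS CLI is configured and i-0abcd1234ef567890 stands in for your real instance ID:
# Print the instance's public DNS name (instance ID here is hypothetical)
aws ec2 describe-instances --instance-ids i-0abcd1234ef567890 --query 'Reservations[].Instances[].PublicDnsName' --output text
# Then copy using the name it prints, e.g.:
scp -i keypair.pem process.py ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/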
I created an EC2 instance and an EFS file system. I am trying to mount the file system, for which I use the following command:
sudo mount -t nfs4 fs-0d06d36f390aeXXXX.efs.us-east-2.amazonaws.com:/myDir
And I am getting this error:
mount.nfs4: Failed to resolve server fs-0d06d36f390aeXXXX.efs.us-east-2.amazonaws.com: No address associated with hostname
Could someone guide me to solve this?
You can solve this issue with the following commands.
First, run nslookup on the EFS domain from a machine where it resolves, and take the IP address:
nslookup fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
Server: 127.0.0.51
Address: 127.0.0.51#51
Non-authoritative answer:
Name: fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
Address: 10.143.27.147
Add the IP address to /etc/hosts along with the EFS domain:
vi /etc/hosts
10.143.27.147 fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
Then try to mount with the following command:
sudo mount -t nfs4 fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com:/ /mnt
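The underlying issue is that an EFS DNS name only resolves from inside the VPC the file system lives in, so the /etc/hosts entry is just a workaround. Mounting by the mount-target IP directly should also work; a minimal sketch, assuming 10.143.27.147 is the address found by nslookup above:
# Mount the EFS mount target by IP, skipping DNS entirely
sudo mount -t nfs4 -o nfsvers=4.1 10.143.27.147:/ /mnt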
I'm trying to connect my local node here:
https://polkadot.js.org/apps/
I go to the top left corner and choose "Local Node".
However, I do not understand where to enter the IP and port for my node.
How do I do that?
Enable SSH tunneling to the server.
The first step is to enable an SSH tunnel from the local host to the remote server where your parastate node is hosted. The command below opens an SSH tunnel to remote host xx.xx.xx.xx on port 9944, which is what the polkadot.js.org endpoint needs:
ssh -N -L 9944:127.0.0.1:9944 -i C:\Users\sshkey\privatekey.ppk root@xx.xx.xx.xx
In case you are not using an SSH key and would like to connect the traditional way by supplying a password, use the option below:
ssh -N -L 9944:127.0.0.1:9944 root@xx.xx.xx.xx
Enter the password when prompted.
You will just have an empty, hanging window. Don't panic, you are all good: the -N flag tells ssh not to run a remote command, so the session simply sits there holding the tunnel open.
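There is nowhere to enter an IP and port in the UI: "Local Node" in polkadot.js.org/apps points at ws://127.0.0.1:9944, which is exactly where the tunnel now listens, so selecting it is enough. To sanity-check the tunnel from a second terminal (a sketch, assuming netcat is installed):
# Succeeds only if the tunnel's local end is listening
nc -vz 127.0.0.1 9944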
How can I connect to a bastion server in an AWS VPC using Ansible 2.x to perform a Docker swarm setup? I've seen this question and the official FAQ.
Already tried providing the following via --extra-vars:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@my.bastion.server.com"', or putting the same parameter in ansible.cfg, or even something like:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -J ec2-user@my.bastion.dns.com
I tried a lot of combinations, but I'm always getting this error message when running a ping task in a playbook:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 10.1.xx.xx port 22: Operation timed out\r\n",
Probably worth mentioning that:
I'm able to connect to the private hosts in my VPC normally using ssh's -J option, for example: ssh -J user@my.bastion.server.com user@vpc.host.private.ip
I'm using Ansible's ec2.py dynamic inventory with ec2.ini configured to map the private IPs for a given tag entry.
It was an SSH misconfiguration problem.
I was able to fix it with the following configuration parameters.
1) ansible.cfg file
[ssh_connection]
ssh_args = -o ProxyCommand="ssh -W %h:%p -q $BASTION_USER#$BASTION_HOST" -o ControlPersist=600s
control_path=%(directory)s/%%h-%%r
pipelining = True
2) ec2.ini file
[ec2]
regions = us-xxxx-x
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address
3) Playbook Execution Command
export BASTION_USER=xxx-xxxx;
export BASTION_HOST=ec2-xx-xx-xx-xx.xxxxx.compute.amazonaws.com;
ansible-playbook -u ec2-xxxx \
-i ./inventory/ec2.py \
./playbook/ping.yml \
--extra-vars \
"var_hosts=tag_Name_test_private ansible_ssh_private_key_file=~/.ssh/my-test-key.pem" -vvv
And voila!
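If the timeout persists, it can help to reproduce the exact ProxyCommand by hand before involving Ansible; a sketch reusing the same environment variables and placeholders as above (10.1.xx.xx stands for one of the private inventory hosts):
export BASTION_USER=xxx-xxxx
export BASTION_HOST=ec2-xx-xx-xx-xx.xxxxx.compute.amazonaws.com
# Same hop Ansible will make: jump through the bastion to the private host
ssh -i ~/.ssh/my-test-key.pem -o ProxyCommand="ssh -W %h:%p -q $BASTION_USER@$BASTION_HOST" ec2-xxxx@10.1.xx.xx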
I have an RDS instance with a MySQL database which can only be accessed by an EC2 instance running in AWS. Now I want to access my RDS instance from my local machine using SSH tunneling. I searched a lot on the net but none of the solutions worked. Can anyone please tell me how to do it step by step with a working solution?
Any help will be highly appreciated!
I tried to run:
ssh -i myNewKey.pem -N -L 3306:myredinstance:3306 ec2-user@myec2-instance
then mysql -u dbauser -p -h 127.0.0.1 in the mysql-js utility, and it gave me an error.
You can do it by setting up an SSH tunnel:
ssh -i /path/to/key -N -L 3306:an_rds_endpoint:3306 user@yourserver.com
Then connect locally:
mysql -u myuser -p -h 127.0.0.1
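A slightly fuller run with the tunnel in the background (a sketch using the same placeholders as above; -f makes ssh background itself after authentication, and --protocol=TCP is belt and braces, since -h 127.0.0.1 already forces TCP while -h localhost would use the local socket):
# Background the tunnel, then connect through it
ssh -f -i /path/to/key -N -L 3306:an_rds_endpoint:3306 user@yourserver.com
mysql --protocol=TCP -u myuser -p -h 127.0.0.1 -P 3306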
I can already connect to EC2 successfully, but when it connects, the commands in my script run on my computer and not on the instance.
ssh -i key.pem -oStrictHostKeyChecking=no ubuntu@XXXXX
echo "Hello World" # runs on my computer
You can do it this way:
ssh -i key.pem -oStrictHostKeyChecking=no ubuntu@XXXXX 'bash -s' < your_script.sh
and have a local file your_script.sh with all the commands you want to run on the EC2 instance, as sketched below.
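A minimal your_script.sh to confirm where things execute (hypothetical contents; 'bash -s' makes the remote shell read the script from stdin, so every line runs on the instance):
#!/bin/bash
# your_script.sh -- everything below executes on the EC2 instance
hostname             # prints the instance's hostname, not your machine's
echo "Hello World"   # now runs remotely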