I don't want to attach an EIP to my chef-client - amazon-web-services

My Chef server is in a VPC and I want to execute this command without an EIP:
knife ec2 server create -r "role[test1]" -I ami-axxxxx --flavor t1.micro -x ubuntu --ssh-key JP_Key -Z us-east-1c --subnet subnet-c1b6d5a8 -g sg-b1e70bde -p 22 --fqdn mynewclientnode.example.com --tags Name=test_knife
I'm getting this error:
ERROR: Net::SSH::HostKeyMismatch: fingerprint 5f:4b:f6:4d:9b:8a:88:a0:9d:fd:9f:ea:5c:ad:31:ef does not match for "10.220.15.174"
10.220.15.174 is the IP of the newly launched instance.
When I attach an EIP, chef-client installs fine.
Is there any way to do this without one?

This is not a Chef, knife, or AWS error. For security reasons, SSH stores the fingerprints of systems in a local cache the first time you connect. If that fingerprint changes (for example, if you re-provision a server using the same FQDN), SSH will throw this error. This is primarily to prevent MITM attacks (where you would be logging into a server that isn't what you think it is).
To fix this error, remove that fingerprint from your ~/.ssh/known_hosts file and run the command again.
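For example, with the address from the error above, the stale entry can be removed with ssh-keygen (the IP here is just the one from your error message; use whichever host or IP appears in yours):

ssh-keygen -R 10.220.15.174

After that, re-running the knife ec2 server create command should get past the host key check.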

Related

Unable to SSH into EC2 server after reboot

I have an Ubuntu 18.04-based EC2 instance using an Elastic IP address. I am able to SSH into the instance without any problems.
apt is executing some unattended updates on the instance. If I reboot the system after the updates, I am no longer able to SSH into it; I get the error ssh: connect to host XXX port 22: Connection refused
A few points:
Even after the updates, I am able to SSH before the reboot.
The method of restart does not make a difference: sudo shutdown -r now and the EC2 dashboard give the same result.
There are no problems with sshd_config. I've detached the volume and attached it to a new working instance; sshd -t did not report any problems either.
I am able to run sudo systemctl restart ssh.service after the updates but before the system restart.
I've tried with and without an Elastic IP. Same result.
From the system logs, I see that SSH is trying to start, but failing for some reason.
I want to find out why the ssh daemon is not starting. Any pointers?
Update:
System Logs
Client Logs
No changes in the security groups before and after reboot
EC2 > Network & Security > Security Groups > Edit inbound rules > SSH 0.0.0.0/0
Step 1: EC2 > Instances > Actions > Image and templates > Create image
Step 2: Launch a new instance using the AMI image.
I had missed the error Failed to start Create Static Device Nodes in /dev. in the system logs. The solution given at https://askubuntu.com/questions/1301750/ubuntu-16-04-failed-to-start-create-static-device-nodes-in-dev solved my problem.
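For anyone debugging the same failure: the unit behind the message "Create Static Device Nodes in /dev" is systemd-tmpfiles-setup-dev.service, so (as a sketch, once you can boot the system or chroot into the volume from a rescue instance) its state and logs can be inspected with:

systemctl status systemd-tmpfiles-setup-dev.service
journalctl -b -u systemd-tmpfiles-setup-dev.service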

Unable to connect to EC2 Linux instance in AWS. Error: Host key verification failed

I have created an EC2 Linux instance in AWS using the Ubuntu Server 20.04 LTS (HVM) AMI. After creating the instance I downloaded the key pair file (.pem), which I named "EC2-Key-Pair", and then launched the instance. On my Kali Linux system I opened a terminal in the directory where I saved the .pem file and ran this command:
chmod 400 EC2-Key-Pair
After running that command, I used this one:
ssh -i "EC2-Key-Pair.pem" ubuntu#ec2-13-232-252-152.ap-south-1.compute.amazonaws.com
where ubuntu is the username and
ec2-13-232-252-152.ap-south-1.compute.amazonaws.com
is the Public IPv4 DNS of my instance. But when I executed this command I got this error:
Host key verification failed.
How do I fix this error? I have executed the command both with and without sudo, but it failed both ways. When I searched for the error on the internet, I found a suggestion that this command could fix it:
ssh-keygen -R Hostname
Where I used my instance's public IPv4 DNS as Hostname:
ssh-keygen -R ec2-13-232-252-152.ap-south-1.compute.amazonaws.com
But it shows this error:
Cannot stat /home/sanniddha/.ssh/known_hosts: No such file or directory
Error after executing the SSH command as root user
Error after executing the SSH command
Error after executing ssh-keygen -R Hostname
This error means that something has changed on your instance since the last login. Most probably you created the EC2 instance with no fixed IP assigned, so:
When you start this instance, it gets a (dynamic) IP and a DNS name based on that IP.
If you shut down the instance and start it again a few hours later, it might get a new IP and a new DNS name.
The trouble you are seeing is because the SSH key fingerprint changed. In general this is not a bad thing; you can accept the warning, but double-check everything.
What is an SSH key fingerprint and how is it generated?
What can cause a changed ssh fingerprint
In your case, it might be because you launched an instance earlier that had a similar DNS name, and its key got added to your ~/.ssh/known_hosts file, e.g.:
xx.xx.xx.xx ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP2oAPXOCdClEnRzlXuxKtygT3AROcruefiPi6JPdzo+=
You can clean ~/.ssh/known_hosts by issuing the following command:
ssh-keygen -R ec2-13-232-252-152.ap-south-1.compute.amazonaws.com
The IP got recycled on the AWS side when you launched the new instance, so the new instance has a different SSH fingerprint from the one stored in your ~/.ssh/known_hosts file, hence the warning.
As pointed out already, you need to open port 22 for your IP to access the instance.
If possible, use the IP address instead of the DNS name for SSH. Also, you don't need sudo for ssh.
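For example (a sketch; the public IP here is just the one embedded in the instance's DNS name, and -o StrictHostKeyChecking=accept-new needs OpenSSH 7.6 or newer), a clean reconnect could look like:

ssh-keygen -R ec2-13-232-252-152.ap-south-1.compute.amazonaws.com
ssh -i EC2-Key-Pair.pem -o StrictHostKeyChecking=accept-new ubuntu@13.232.252.152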

Can't open an SSH tunnel from my Linux shell (EC2 exposing an RDS db)

I'm struggling to open an SSH tunnel to access an RDS MySQL instance through an EC2 bastion host. Using desktop clients (Navicat, MySQL Workbench) with an SSH tunnel configured, everything works as expected, but when I run ssh -i keys.pem user@ec2-instance -L 3307:rds-mysql-instance:3306 -N in my terminal, the command hangs indefinitely.
I can access my EC2 instance using ssh -i keys.pem user@ec2-instance, and from my EC2 instance I can access the RDS database.
Am I missing something in the configuration?
I also tried opening all ports on my Security Group just to be sure that it wasn't a port-related issue.
Any help/ideas?
Based on the comments.
To identify the issue, more verbose output from ssh can be requested using the -v, -vv, or even -vvv flags. Thus, the command for debugging can be:
ssh -i keys.pem user@ec2-instance -L 3307:rds-mysql-instance:3306 -N -vv
The detailed output made it possible to identify the issue with the connection and fix it.
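Also worth knowing: with -N, ssh stays in the foreground holding the tunnel open, so the command sitting there silently is not by itself a failure. A quick sanity check (a sketch; the mysql client and credentials are assumptions) is to test the forwarded port from a second terminal:

mysql -h 127.0.0.1 -P 3307 -u dbuser -p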

'Waiting for SSH to be available' during creation of a generic docker machine with amazon aws

I'm trying to use docker-machine with my Docker instance hosted on Amazon AWS.
I run the following command:
$ sudo docker-machine create --driver generic --generic-ip-address={EC2 IP} --generic-ssh-key ~/.ssh/id_rsa dockeraws
Running pre-create checks...
Creating machine...
(dockering) Importing SSH key...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Error creating machine: Error detecting OS: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
But it gets stuck on 'Waiting for SSH to be available...' and I don't know why.
I've also opened ports '22' and '2376' but it's still not working.
For my instance I'm using the template given on the Docker page here -> https://docs.docker.com/docker-for-aws/
Try adding your machine's IP address to the allowed hosts in the security group used by your EC2 instance. This solved the issue for me.
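For example (a sketch; the security group ID and IP are placeholders for your own), the rule can be added with the AWS CLI:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32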
Generate an SSH key (if you don't have one already):
ssh-keygen
Then install your public key on the server using ssh-copy-id:
ssh-copy-id user@remote-server
where user is your remote user and remote-server is your server's IP/URL.
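You can then verify that key-based login works non-interactively before retrying (user and remote-server are the same placeholders as above):

ssh -i ~/.ssh/id_rsa user@remote-server

If that logs you in without a password prompt, re-run the docker-machine create command.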

Sync files between two compute engine instances with internal IPs

I'm working on a GCP project in which I have numerous small files on instance-A that I need to transfer to instance-B. The transfer works fine over rsync with the external IP, but not when I try to use the internal IP.
How can I sync files between my 2 instances with internal IPs?
Help me, please!
Thanks in Advance!
First, check whether SSH from instance-A to instance-B using the internal IP of instance-B fails with a "Permission denied (publickey)" error.
From instance A, run:
ssh [user]@[internal IP of instance B]
If this is the case, you can generate new keys with ssh-keygen:
ssh-keygen -t rsa -f ~/.ssh/[key file name] -C [user]
And add them to metadata.
Once done, check whether you are able to SSH into the instance using the internal IP. I was able to log in successfully and also synced two directories using rsync with the internal IP:
rsync -v -e ssh ~/[source dir]/* [user]@[internal IP of instance B]:~/[destination dir]
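If you need to push the new public key to instance-B's metadata, one way (a sketch; the instance name, zone, and [user] are placeholders, and note that add-metadata replaces any existing ssh-keys value on the instance, so merge in any keys that are already there) is via gcloud:

echo "[user]:$(cat ~/.ssh/[key file name].pub)" > /tmp/gcp-ssh-keys.txt
gcloud compute instances add-metadata instance-B --zone=[zone] --metadata-from-file ssh-keys=/tmp/gcp-ssh-keys.txt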