Unable to connect via SSH after running sudo ufw allow 'Nginx Full' - amazon-web-services

Can anybody please help me?
I'm unable to connect to my server after running the command sudo ufw allow 'Nginx Full'.
Is there any option in AWS to undo this change, or anything else I can do?
Thanks in advance.

Stop the running EC2 instance
Detach its /dev/sda1 volume (let's call it volume A)
Launch a new t1.micro EC2 instance in the same subnet; otherwise you will have to terminate it and create it again
Attach volume A to the new micro instance as /dev/xvdf
SSH to the new micro instance and mount volume A to /mnt/tmp
Disable UFW by setting ENABLED=no in /mnt/tmp/etc/ufw/ufw.conf (sketched below)
Exit
Terminate the micro instance
Detach volume A from it
Attach volume A back to the main instance as /dev/sda1
Start the main instance
Log in as before
Source
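A minimal sketch of the mount-and-disable step, assuming the attached volume's root partition shows up as /dev/xvdf1 on the rescue instance (on Nitro-based instance types it may appear as /dev/nvme1n1p1 instead):
sudo mkdir -p /mnt/tmp
sudo mount /dev/xvdf1 /mnt/tmp
# flip the UFW flag on the broken root volume
sudo sed -i 's/ENABLED=yes/ENABLED=no/' /mnt/tmp/etc/ufw/ufw.conf
sudo umount /mnt/tmp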

If you have a server backup, try restoring from that backup.
If not, take a look at the AWS Troubleshooting Guide.
Please post your error or logs from the connection attempt; we can't help much without logs.

After struggling for two days I found a few easy alternatives; here they are:
Use AWS Session Manager to connect without SSH or a key (yt; one-liner below)
Use the EC2 serial console
Update the instance's user data (link)
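Assuming the SSM agent is running on the instance, its instance profile grants Systems Manager access, and the Session Manager plugin is installed locally, the connection itself is a one-liner (the instance ID is hypothetical):
aws ssm start-session --target i-0123456789abcdef0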

Related

Unable to SSH into EC2 server after reboot

I have an Ubuntu 18.04-based EC2 instance using an Elastic IP address. I am able to SSH into the instance without any problems.
apt is executing some unattended updates on the instance. If I reboot the system after the updates, I am no longer able to SSH into the system. I am getting the error ssh: connect to host XXX port 22: Connection refused
A few points:
Even after the updates, I am able to SSH in before the reboot
The method of restart does not make a difference: sudo shutdown -r now and the EC2 dashboard give the same result
There are no problems with sshd_config. I've detached the volume and attached it to a new working instance; sshd -t did not report any problems either
I am able to run sudo systemctl restart ssh.service after the updates but before the system restart
I've tried with and without the Elastic IP, with the same result
From the system logs, I can see that SSH is trying to start but failing for some reason (a way to pull those logs without SSH is sketched below)
I want to find out why the ssh daemon is not starting. Any pointers?
Update:
System Logs
Client Logs
No changes in the security groups before and after reboot
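When SSH is down, the system log can still be retrieved through the EC2 console output; a minimal sketch using the AWS CLI, with a hypothetical instance ID:
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --latest --output text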
EC2 > Network & Security > Security Groups > Edit inbound rules > SSH 0.0.0.0/0
Step 1: EC2 > Instances > Actions > Image and templates > Create image
Step 2: Launch a new instance using the AMI image.
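A hypothetical CLI equivalent of those two steps (the instance ID, AMI ID, and key name are illustrative):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name rescue-image
aws ec2 run-instances --image-id ami-0abc1234def567890 --instance-type t3.micro --key-name my-key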
I had missed the error Failed to start Create Static Device Nodes in /dev. in the system logs. The solution given at https://askubuntu.com/questions/1301750/ubuntu-16-04-failed-to-start-create-static-device-nodes-in-dev solved my problem.

Cannot access GCP VM instance

I've been trying to connect to a VM instance for the past couple of days now. Here's what I've tried:
Trying to SSH into it returns username@ipaddress: Permission denied (publickey).
Using the Google Cloud SDK returns this:
No zone specified. Using zone [us-central1-a] for instance: [instancename].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
FATAL ERROR: No supported authentication methods available (server sent: publickey)
ERROR: (gcloud.compute.ssh) Could not SSH into the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
Using the browser SSH just gets stuck on "Transferring SSH keys to the VM."
Using PuTTY also results in No supported authentication methods available (server sent: publickey)
I checked the serial console and found this:
systemd-hostnamed.service: Failed to run 'start' task: No space left on device
I did recently resize the disk and restarted the VM, but this error still occurs.
Access to port 22 is allowed in the firewall rules. What can I do to fix this?
After increasing the disk size, you need to restart the instance so the filesystem can be resized; that is only necessary in this specific case because you have already run out of space.
If you have not already done so, create a snapshot of the VM's boot disk.
Try to restart the VM.
If you still can't access the VM, do the following:
Stop the VM:
gcloud compute instances stop VM_NAME
Replace VM_NAME with the name of your VM.
Increase the size of the boot disk:
gcloud compute disks resize BOOT_DISK_NAME --size DISK_SIZE
Replace the following:
BOOT_DISK_NAME: the name of your VM's boot disk
DISK_SIZE: the new larger size, in gigabytes, for the boot disk
Start the VM:
gcloud compute instances start VM_NAME
Reattempt to SSH to the VM.
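A hypothetical end-to-end run of the steps above, assuming a VM named web-1 in zone us-central1-a whose boot disk carries the same name:
gcloud compute instances stop web-1 --zone us-central1-a
gcloud compute disks resize web-1 --size 50GB --zone us-central1-a
gcloud compute instances start web-1 --zone us-central1-a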

AWS Lightsail SSH says UPSTREAM_NOT_FOUND, and PuTTY also cannot connect

ssh -i "LightsailDefaultKey-ap-south-1.pem" bitnami#[ip-of-lightsail-instance]
ssh: connect to host 6[ip-of-lightsail-instance] port 22: Connection timed out
UPSTREAM_NOT_FOUND
An error occurred and we were unable to connect or stay connected to your instance. If this instance has just started up, try again in a minute or two.
UPSTREAM_NOT_FOUND [519]
PuTTY says
Connection Timeout
Create a snapshot, create a new instance from it, and add this launch script:
sudo ufw disable
sudo iptables -F
sudo mv /etc/hosts.deny /etc/hosts.deny_backup
sudo touch /etc/hosts.deny
echo "Port 22" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl enable sshd
sudo systemctl restart sshd
Wait 10-15 minutes. Done! Issue fixed. :-)
Ran into the same problem. Managed to log in after rebooting from the browser. My problem started after some upgrades, updates, and heavy installations that used up most of my 512 MB of memory. The solution going forward is to create a swap file to improve the performance of the system.
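A minimal sketch for adding a 1 GB swap file (the size and path are illustrative):
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab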
I struggled with this 519 Upstream Error for several days in Lightsail as well. I could not connect via SSH-in-Browser, nor via SSH in my Mac Terminal. It simply timed out.
However, I found a solution that works -
In short, you can:
Create a snapshot of the "broken server"
Export to Amazon EC2 (think of EC2 as a more manipulatable version of Lightsail)
Create a new volume from that snapshot in EC2 and mount it on a new instance
There is a great video here that I followed step by step:
https://www.youtube.com/watch?v=EN2oXVuOTSo
After following those steps, I was able to SSH in to the new machine and recover all my data from the volume mounted under /mnt. You may need to change some permissions with chown to access the data.
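For example (the user and path are illustrative):
sudo chown -R ubuntu:ubuntu /mnt/data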
Were you able to get your instance working, or was it just data retrieval?
For data and files only, EC2 is not required. You can use the AWS CLI to create a disk snapshot, then create a disk from it, attach that disk to any instance, mount it, and access the files.
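A sketch of that flow with the Lightsail CLI; the resource names are illustrative, and the subcommands and flags should be checked against the current CLI reference:
aws lightsail create-disk-snapshot --instance-name broken-1 --disk-snapshot-name rescue-snap
aws lightsail create-disk-from-snapshot --disk-snapshot-name rescue-snap --disk-name rescue-disk --availability-zone ap-south-1a --size-in-gb 40
aws lightsail attach-disk --disk-name rescue-disk --instance-name helper-1 --disk-path /dev/xvdf
# then, on helper-1:
sudo mount /dev/xvdf1 /mnt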
My SSH problem was solved by changing the SSH port; after just a reboot and using the new port, I could connect again.
If this still doesn't work, you can resort to the official Adding a Lightsail instance to Systems Manager while launching section. I followed it to create a new instance from the snapshot, and the new instance was reachable by SSH.

SSH back to AWS Lightsail after UFW enabling

I enabled the UFW service without first allowing SSH access, then logged off.
I am now unable to SSH back into the instance.
The steps I have already taken:
Made a snapshot and created a new instance from it
You could create a snapshot, use it to create another new instance, and add sudo service ufw stop to the launch script.
The general point: try to execute a command (firewall disabling) while creating a new instance. I was able to do it with the usage of AWS CLI:
aws lightsail create-instances-from-snapshot --region eu-central-1 --instance-snapshot-name TEST_SSH_FIX-1591429003 --instance-names TEST_SSH_FIX_2 --availability-zone eu-central-1a --user-data 'sudo service ufw stop' --bundle-id nano_2_0
It works.

AWS EC2 Instance connection reset by port 22

I have an AWS EC2 p3.2xlarge instance. I can SSH in and connect to it easily. However, after about 20 minutes, while I am running a Keras model on it, the connection resets and I am kicked out with the error Connection reset by 54.161.50.138 port 22. I am then able to reconnect, but I have to start training the model over again because my progress is lost. This happens every time I connect to the instance. Any idea why this is happening?
For SSH I am using Gow, which lets me run Linux commands on Windows - https://github.com/bmatzelle/gow/wiki
I checked my public IP address before and after the reset, and it was the same.
I also looked at the CPU usage in Amazon CloudWatch, and it was normal - 20%.
I figured out a partial solution to this. In the instance terminal, follow these steps (condensed as shell commands below):
run the command tmux
in the new shell that pops up, start the job
detach from the tmux shell using the shortcut (Ctrl+b, then d)
if the SSH connection resets, SSH to the instance again and run tmux attach
the job should have kept running, and you can resume where you left off
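The same flow condensed into commands (the session name and training command are illustrative):
tmux new -s train          # start a named session
python train.py            # run the job inside the tmux session
# detach with Ctrl+b then d; after reconnecting over SSH:
tmux attach -t train       # reattach and resume where you left off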