SSH back into AWS Lightsail after enabling UFW

I enabled the UFW service without first allowing SSH access, then logged off.
I am now unable to SSH back into the instance.
The steps I have already taken:
Made a snapshot and created a new instance from it

You could create a snapshot and use it to create another new instance, and add sudo service ufw stop to the launch script.

The general point: execute a command (disabling the firewall) while creating the new instance. I was able to do this using the AWS CLI:
aws lightsail create-instances-from-snapshot --region eu-central-1 --instance-snapshot-name TEST_SSH_FIX-1591429003 --instance-names TEST_SSH_FIX_2 --availability-zone eu-central-1a --user-data 'sudo service ufw stop' --bundle-id nano_2_0
It works.
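To verify the replacement instance actually came up, you can poll its state with the same CLI (instance name taken from the command above):
aws lightsail get-instance-state --region eu-central-1 --instance-name TEST_SSH_FIX_2
Once it reports "running", try SSH again; the user-data command should have stopped UFW during boot.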

Related

AWS Lightsail SSH says UPSTREAM_NOT_FOUND and also not able to connect by PuTTY

ssh -i "LightsailDefaultKey-ap-south-1.pem" bitnami#[ip-of-lightsail-instance]
ssh: connect to host 6[ip-of-lightsail-instance] port 22: Connection timed out
UPSTREAM_NOT_FOUND
An error occurred and we were unable to connect or stay connected to your instance. If this instance has just started up, try again in a minute or two.
UPSTREAM_NOT_FOUND [519]
PuTTY says:
Connection Timeout
Create a snapshot, create a new instance from it, and add this launch script:
sudo ufw disable
sudo iptables -F
sudo mv /etc/hosts.deny /etc/hosts.deny_backup
sudo touch /etc/hosts.deny
echo "Port 22" >> /etc/ssh/sshd_config
systemctl restart sshd
sudo systemctl enable sshd
sudo systemctl restart sshd
Wait 10-15 minutes. Done! Issue fixed :-)
Ran into the same problem. Managed to log in after rebooting from the browser. My problem started after some upgrades, updates, and heavy installations that used up most of my 512 MB of memory. The solution going forward is to create a swap file to improve the performance of the system.
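For reference, a minimal sketch for creating a 1 GB swap file on Ubuntu (the size is just an example; pick one that fits your disk):
sudo fallocate -l 1G /swapfile                 # reserve space (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile                       # swapon requires restrictive permissions
sudo mkswap /swapfile                          # format the file as swap
sudo swapon /swapfile                          # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots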
I struggled with this 519 Upstream Error for several days in Lightsail as well. I could not connect via SSH-in-Browser, nor via SSH in my Mac Terminal. It simply timed out.
However, I found a solution that works -
In short, you can:
Create a snapshot of the "broken server"
Export it to Amazon EC2 (think of EC2 as a more configurable version of Lightsail)
Create a new volume from that snapshot in EC2, and attach it to a new instance
There is a great video here that I followed step by step:
https://www.youtube.com/watch?v=EN2oXVuOTSo
After following those steps, I was able to SSH into the new instance and recover all my data in the /mnt directory. You may need to change some permissions with chown to access the data.
Were you able to get your instance working, or was it just data retrieval?
For data and files only, EC2 is not required. You can use the AWS CLI to create a disk snapshot, create a disk from it, attach it to any instance, mount it, and then access the files.
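A sketch of that flow with the AWS CLI, assuming hypothetical names broken-instance (the locked-out one) and helper-instance (any reachable instance in the same zone):
aws lightsail create-disk-snapshot --instance-name broken-instance --disk-snapshot-name rescue-snap
aws lightsail create-disk-from-snapshot --disk-snapshot-name rescue-snap --disk-name rescue-disk --availability-zone eu-central-1a --size-in-gb 40
aws lightsail attach-disk --disk-name rescue-disk --instance-name helper-instance --disk-path /dev/xvdf
Then, on the helper instance, mount it and copy your files out (the partition suffix can differ):
sudo mkdir -p /mnt/rescue && sudo mount /dev/xvdf1 /mnt/rescue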
My SSH connection broke after changing the SSH port; after a reboot and switching to the new port, I could connect again.
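For reference, connecting on a non-default port looks like this (2222 is a made-up example):
ssh -i your-key.pem -p 2222 ubuntu@<instance-ip>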
If this still doesn't work, you could resort to the official "Adding a Lightsail instance to Systems Manager while launching" guide, which I followed to create a new instance from the snapshot; the new instance was reachable by SSH.

SSH issues on GCP VM migrated from AWS

I have migrated an EC2 instance (Amazon Linux) to Google Cloud (Ubuntu 18.04) using CloudEndure.
But I am not able to SSH into the Google Cloud VM. I don't have the EC2 instance anymore. How can I access the Google Cloud VM? Error message:
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255]
Using the gcloud command-line tool you can configure your SSH:
gcloud compute config-ssh
For more details on config-ssh, see:
Link
If gcloud compute config-ssh doesn't work, check the firewall rules for your machine: find the VPC it's in and make sure port 22 is open; it may be blocked.
If you're not sure whether SSH can come through, create a rule for it, as sketched after this answer.
A very similar issue was also discussed in this topic on StackOverflow, which might help you.
You can (to be absolutely sure SSH traffic is allowed to your VM) set up a startup script for it: edit the VM in question, find the "Custom Metadata" section, click "Add Item", then enter startup-script as the key and the command sudo ufw allow ssh in the "value" field.
With SSH traffic enabled in both the GCP firewall and on the VM itself, you should be able to log in.
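A sketch of such a firewall rule with gcloud (the rule name is arbitrary, and 0.0.0.0/0 opens SSH to the whole internet, so narrow the source range if you can):
gcloud compute firewall-rules create allow-ssh --network default --allow tcp:22 --source-ranges 0.0.0.0/0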

Locked out of ssh from server

I was logged into my AWS EC2 server via SSH. I ran iptables -P INPUT DROP to check something, and I forgot to allow port 22 first so that I could keep my SSH connection.
Is there something I can do to regain the connection?
You can use AWS Systems Manager Session Manager if your server has the AWS SSM agent installed on the EC2 server and the correct IAM permissions, etc.
Or you could use AWS Systems Manager Run Command to run a single command to fix the iptables, if you have the AWS SSM agent installed on the EC2 server; a sketch follows below.
Otherwise, since you didn't save the iptables rules, they should reset to the previous settings if you reboot the server.
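A minimal sketch of that Run Command fix (the instance ID is a placeholder; AWS-RunShellScript executes as root, so sudo is not needed):
aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids i-0123456789abcdef0 --parameters '{"commands":["iptables -P INPUT ACCEPT"]}'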

Unable to connect via SSH after running sudo ufw allow 'Nginx Full'

Can anybody please help me?
I'm unable to connect to my server after running this command: sudo ufw allow 'Nginx Full'.
In AWS, is there any option to undo this change, or anything else?
Thanks in advance
Stop the running EC2 instance
Detach its /dev/sda1 volume (let's call it volume A)
Start a new t1.micro EC2 instance; create it in the same subnet, otherwise you will have to terminate the instance and create it again
Attach volume A to the new micro instance, as /dev/xvdf
SSH to the new micro instance and mount volume A to /mnt/tmp
Disable UFW by setting ENABLED=no in /mnt/tmp/etc/ufw/ufw.conf (see the sketch below the list)
Exit
Terminate micro instance
Detach volume A from it
Attach volume A back to the main instance as /dev/sda1
Start the main instance
Login as before
Source
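Steps 5 and 6 on the micro instance might look like this (the device partition suffix can vary):
sudo mkdir -p /mnt/tmp
sudo mount /dev/xvdf1 /mnt/tmp
sudo sed -i 's/ENABLED=yes/ENABLED=no/' /mnt/tmp/etc/ufw/ufw.conf
sudo umount /mnt/tmp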
If you have a server backup, try restoring to that backup.
If not, try looking at the AWS Troubleshooting Guide.
Please post your error or logs upon connecting; it's hard to help much without logs.
After struggling for 2 days I found a few easy alternatives; here they are:
Use AWS Session Manager to connect without SSH or a key (yt); a sketch follows after this list
Use the EC2 serial console
Update the instance user data (link)
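For the first option, once the SSM agent is installed and the IAM permissions are in place, connecting is a single command (the instance ID is a placeholder):
aws ssm start-session --target i-0123456789abcdef0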

How to launch an Amazon EC2 instance inside a VPC using Chef without a gateway machine?

I am using Chef to create Amazon EC2 instances inside a VPC. I have assigned an Elastic IP to the new instance using the --associate-eip option of knife ec2 server create. How do I bootstrap it without a gateway machine? It gets stuck at "Waiting for sshd" because it uses the private IP of the newly created server to SSH into it, even though it has an Elastic IP allocated.
Am I missing anything? Here is the command I used.
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids <security_group> \
  --associate-eip <EIP> --no-host-key-verify --ssh-key <keypair> \
  --ssh-user ubuntu --run-list "<role_list>" \
  --image ami-59590830 --flavor m1.large --availability-zone us-east-1b \
  --environment staging --ebs-size 10 --ebs-no-delete-on-term \
  --template-file <bootstrap_file> --verbose
Is there any other work-around/patch to solve this issue?
Thanks in advance
I finally got around the issue by using the --server-connect-attribute option, which is supposed to be used along with an --ssh-gateway attribute.
Add --server-connect-attribute public_ip_address to the above knife ec2 server create command, which will make knife use the public_ip_address of your server (full command shown below).
Note: this hack works with knife-ec2 (0.6.4). Refer to def ssh_connect_host here.
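The command from the question with that flag added would look like:
bundle exec knife ec2 server create --subnet <subnet> --security-group-ids <security_group> \
  --associate-eip <EIP> --server-connect-attribute public_ip_address \
  --no-host-key-verify --ssh-key <keypair> --ssh-user ubuntu --run-list "<role_list>" \
  --image ami-59590830 --flavor m1.large --availability-zone us-east-1b \
  --environment staging --ebs-size 10 --ebs-no-delete-on-term \
  --template-file <bootstrap_file> --verbose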
Chef will always use the private IP while registering EC2 nodes. You can get this working by having your Chef server inside the VPC as well, though that's definitely not a best practice.
The other workaround is to let your Chef server stay outside the VPC. Instead of bootstrapping the instance with the knife ec2 command, follow the instructions over here.
This way you will bootstrap your node from the node itself and not from the Chef server/workstation.