Unable to connect to amazon EC2 instance via PuTTY - amazon-web-services

I created a new Amazon EC2 instance in Amazon Web Services (AWS) by following the documentation. I even added an SSH rule like this:
Port: 22
Type: SSH
Source: <My IP address>/32
I downloaded the .pem file and converted it into a .ppk file using PuTTYgen. Then I added the host name in PuTTY like this:
ec2-user@<public_DNS>
I selected the default settings, added that .ppk file to PuTTY, tried to log in, and got an error.
Even the troubleshooting link didn't help me.
I'm also getting an error in the system logs.
How can I connect to my Amazon EC2 instance via PuTTY?

Things to check when trying to connect to an Amazon EC2 instance:
Security Group: Make sure the security group allows inbound access on the desired ports (e.g. 80, 22) for the appropriate IP address range (e.g. 0.0.0.0/0). This solves the majority of problems (a CLI sketch for checking this appears at the end of this answer).
Public IP Address: Check that you're using the correct Public IP address for the instance. If the instance is stopped and started, it might receive a new Public IP address (depending on how it has been configured).
VPC Configuration: Accessing an EC2 instance that is launched inside a Virtual Private Cloud (VPC) requires:
An Internet Gateway
A routing table connecting the subnet to the Internet Gateway
NACLs (Network ACLs) that permit through-traffic
If you are able to launch and connect to another instance in the same subnet, then the VPC configuration would appear to be correct.
The other thing to check would be the actual configuration of the operating system on the instance itself. Some software may be affecting the configuration so that the web server / ssh daemon is not working correctly. Of course, that is hard to determine without connecting to the instance.
If you are launching from a standard Amazon Linux AMI, SSH works out of the box. The web server (port 80) requires installation and configuration of software on the instance, which is your responsibility to maintain.
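The first two checks can also be done from the command line. A minimal sketch using the AWS CLI, assuming it is installed and configured (the instance and security group IDs are placeholders):

    # Show the instance's current public IP address and attached security groups
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[].Instances[].{PublicIP:PublicIpAddress,SGs:SecurityGroups}'

    # Show a security group's inbound rules (look for port 22 from your IP range)
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
        --query 'SecurityGroups[].IpPermissions'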

Ajay,
Try this. Go to your VPC dashboard, click on Network ACLs, and on the associated ACL update your Inbound Rules to allow SSH access on port 22.
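If you prefer the command line, the equivalent rule can be added with the AWS CLI (a sketch; the ACL ID and rule number are placeholders, and 0.0.0.0/0 can be narrowed to your own IP):

    # Allow inbound SSH (TCP port 22) through the network ACL
    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --ingress --rule-number 100 --protocol tcp \
        --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow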

Go to the VPC attached to the instance and add an entry to the route table with:
0.0.0.0/0 - Destination
Internet Gateway of your VPC - Target
Save it and try to connect.
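The same route can be created with the AWS CLI (a sketch; the route table and gateway IDs are placeholders):

    # Route all outbound traffic through the VPC's Internet Gateway
    aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0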

Go to VPC --> Security Group --> Edit inbound rules --> set the SSH source IP to Anywhere, save it, and try to log in with your PuTTY client. Finally, go back to your security group inbound rules and change the source IP from Anywhere back to My IP (or any custom IP you want), then save.
Note: I assume that you have successfully stored and converted your private key.
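This widen-then-narrow procedure can also be scripted with the AWS CLI (a sketch; the group ID and the 203.0.113.5 address are placeholders):

    # Temporarily open SSH to the world
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Once the connection works, remove the open rule...
    aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 0.0.0.0/0

    # ...and re-add it restricted to your own IP
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 203.0.113.5/32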

Security Group - this must accept traffic from your IP address, e.g.:
Protocol: SSH, Port: 22, IP address: <some IP> - ALLOW
(All traffic on any port from 0.0.0.0/0 means allow from any IP address.)
Route Table - make sure you have an outgoing traffic route enabled, e.g.:
Destination: 0.0.0.0/0
Target: internet gateway
Use or generate a private key.

I struggled with this problem for ages after my EC2 instance suddenly started refusing a connection. I tried every answer on SO and Google but nothing helped!
The fix was to make sure that the Network ACL inbound rules were updated to match the rules on the security group.
I have no clue why it worked yesterday and stopped today, but this fixed it.
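To compare the two quickly, you can dump the ACL entries for the instance's subnet with the AWS CLI and hold them up against the security group rules (a sketch; the subnet ID is a placeholder):

    # List the network ACL entries for the subnet the instance lives in
    aws ec2 describe-network-acls \
        --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
        --query 'NetworkAcls[].Entries'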

Related

Connecting to an RDS Instance that is on VPC

I am trying to connect to my AWS RDS MariaDB instance, hosted in us-east-2 (Ohio), from my local machine. I am trying to avoid making the instance publicly available, but I am struggling to get this connection to work. Right now I am trying to connect from my local machine, but eventually I hope to host a Node.js server on a static IP to talk to it.
The setup I have now is the following:
A single VPC that my RDS instance is connected to, whose CIDR contains my public IP (x.y.z.0/24)
A route table which includes a route for my public IP with target 'local'
Network ACL inbound and outbound rule number 1 is to allow All TCP from 0.0.0.0/0
The Default security group which also allows all inbound and outbound traffic
A VPC endpoint attached to the RDS service
With all of this set up, I figured anyone who has the DNS name of my VPC endpoint should be able to talk to my RDS instance, but I cannot get a connection. I have used every DNS name associated with my endpoint, and every single one of them times out when I try to sign in to the database. I have been fumbling with this for days and would like to get past this point of the initial setup.
Things possibly to note:
The Network ACL comes with a default rule of "*" that denies all traffic. I do not know in what order that rule is evaluated. I chose 1 for my allow-all rule, but I have also tried rule 100. Neither seems to work.
I know my RDS instance is in us-east-2a and I have made sure to add the us-east-2a subnet to my VPC endpoint. Using the DNS name that includes that availability zone was at one point giving me "network unreachable" for a little while, before I realized the subnet ID I had chosen was not the default; fixing that just gave me a timeout again.
I am trying to use DBeaver to connect to the VPC endpoint, but I have also used the console command mysql -h vpce-<random characters>-<VPC ID>-us-east-2a.rsa.us-east-2.vpce.amazonaws.com -u admin -p and gotten the same timeout.
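One way to narrow down a timeout like this is to test raw TCP reachability before involving any SQL client (a sketch; the endpoint name is the placeholder from above, and -w 5 sets a 5-second timeout):

    # Check whether anything answers on the MariaDB port at the endpoint
    nc -vz -w 5 vpce-<random characters>-<VPC ID>-us-east-2a.rsa.us-east-2.vpce.amazonaws.com 3306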

Unable to access AWS EC2 instance

I am unable to access an AWS EC2 instance even after setting the inbound rules to allow all traffic:
I get this error:
This site can’t be reached
X.XX.XXX.XX refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
How can I fix this?
I would:
Make sure your inbound rules are as you have shown and that your outbound rules allow all traffic to exit.
In the EC2 Dashboard, click on Instances (running) and then click on the Instance ID. Click on the VPC ID for that instance and then on Main network ACL. Now click on the Network ACL ID and confirm your Inbound rules, Outbound rules and Subnet associations. Make sure nothing here is blocking access. By default the Inbound and Outbound rules allow all traffic and all subnets will be listed there.
You do not say so, but I imagine you have SSH access to the instance. Make sure the HTTP and HTTPS services are running and listening for connections on the interface IP address and not on 127.0.0.1.
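One way to check the listening addresses (a sketch, assuming a Linux instance with the ss utility; the example output in the comments is illustrative):

    # List listening TCP sockets and filter for the web ports
    sudo ss -tlnp | grep -E ':80|:443'
    # Good: LISTEN 0 511 0.0.0.0:80 ...    (listening on all interfaces)
    # Bad:  LISTEN 0 511 127.0.0.1:80 ...  (loopback only; unreachable from outside)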
Make sure iptables is not blocking access. If you have existing rules, you may want to clear them.
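For example (a sketch; setting the default policy to ACCEPT before flushing avoids locking yourself out, but only clear rules you are sure you don't need):

    # Show the current rules
    sudo iptables -L -n

    # Accept everything by default, then flush all rules
    sudo iptables -P INPUT ACCEPT
    sudo iptables -F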
Run tcpdump and look for traffic on ports 80 or 443.
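For instance (a sketch; the interface may be eth0, ens5 or similar depending on the instance type):

    # Watch for inbound web traffic while you retry the request from your browser
    sudo tcpdump -ni eth0 'port 80 or port 443'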
If it is still not working... make sure you are accessing the right IP address; if you're not using an Elastic IP and you restarted the instance, it will have a new public IP address.
If this is a NAT instance, you must stop source / destination checking. A NAT instance must be able to send and receive traffic when the source or destination is not itself.
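Disabling the check is a single AWS CLI call (a sketch; the instance ID is a placeholder):

    # Disable source/destination checking so the instance can forward traffic
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
        --no-source-dest-check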
Is your EC2 instance in a subnet that assigns public IP addresses? This can commonly happen when you have accidentally attached the EC2 instance to a private subnet.
If this is the case, make an AMI of the EC2 instance and re-create it in a public subnet.
Edit: I had perhaps assumed the issue was simpler than it might be. Dan M explains how to ensure that the HTTP and HTTPS daemons are running, but you can also confirm that the server is working "correctly" by running curl http://localhost from the EC2 instance itself. If this returns the HTML you're expecting, then I would recommend going to the AWS VPC Network Reachability Analyzer - https://eu-west-2.console.aws.amazon.com/vpc/home?region=eu-west-2#ReachabilityAnalyzer (but you'll need to select the correct region, obviously) - and creating a "path" to test. When this fails (assuming it fails), the report should tell you everything you need to know, and if you're unsure about how to interpret it, post it here.
NB: perhaps create a path from the internet gateway to the network interface on your EC2 web server, and set the optional Destination port to 80.

Not able to ssh/http into EC2 instance

I am at my wits end with this, please help.
I am creating EC2 instances in my default public VPC, yet I am not able to SSH or HTTP to my instance or the web server running on the machine. I checked the following:
The SG has inbound SSH, HTTP and HTTPS allowed from 0.0.0.0/0 and is assigned to my instance
The default VPC has route tables with 0.0.0.0/0 pointed to the IGW
The NACLs are configured to allow all traffic. I also manually updated them to allow only HTTP, HTTPS and SSH
The key I use has been given the right permissions by running chmod 400 filename
Despite all this I am not able to connect to the EC2 instance (the AMI is Amazon Linux 2).
When I try to SSH, I get a connection timeout error after a while. Initially I thought it was my office network, but I am getting the same from my home network with no firewalls in place.
To allow an SSH connection, you will need:
An Amazon EC2 instance running Linux launched in a public subnet (defined as having a Route Table that directs 0.0.0.0/0 to an Internet Gateway)
A Security Group permitting Inbound access on port 22 (Outbound configuration is irrelevant)
Network ACLs left at their default settings of Allow All in both directions
A Public IP address associated with the instance
From your descriptions, I would say that the problem is probably with the Outbound NACLs. Return traffic from an SSH session goes back to the source port on the initiating server, which is not port 22. In general, only change the NACLs if you have a specific reason, such as creating a DMZ. I recommend you reset the NACL rules to Allow All traffic in both directions.
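If the NACLs have been customized, the return-path rule described above looks like this in CLI form (a sketch; the ACL ID and rule number are placeholders, and 1024-65535 is the usual ephemeral port range):

    # Allow outbound return traffic to clients' ephemeral ports
    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --egress --rule-number 100 --protocol tcp \
        --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow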

Unable to SSH to my EC2 instance despite adding my IP in the security group route table

I have tried everything I could:
Deleted the previous EC2 instances
Used a new key pair
Used PuTTY to connect with the new key pair
Used the Chrome extension Secure Shell App to connect to the EC2 instance with the new key pair
I added my IP address to my security group inbound table but am still not able to access the EC2 instances.
Attached are the images of my issues.
Cause of the problem:
The port number for SSH is 22.
However, the screenshot for the ssh error shows that the connection is being attempted on port 80.
Suggested fix:
The problem can be fixed by specifying the port number as '22' in the SSH client connection settings.
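With a command-line client that might look like this (a sketch; the key files, user name and address are placeholders):

    # OpenSSH: connect explicitly on port 22
    ssh -i mykey.pem -p 22 ec2-user@<public-ip>

    # PuTTY's command-line client plink uses -P for the port
    plink -ssh -i mykey.ppk -P 22 ec2-user@<public-ip>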
To access the EC2 instance via SSH, check:
The instance has been launched in a public subnet (defined as having a Route Table that routes traffic to an Internet Gateway)
The Security Group should be permitting inbound traffic on port 22 from your IP address (or a wider range, such as 0.0.0.0/0)
Don't change the NACLs from their default settings
Make sure the instance is running Linux
For EC2 Instance Connect, make sure it is using Amazon Linux 2 or Ubuntu 16.04 or later
Make sure you are connecting to the public IP address of the instance (based on your pictures, you are doing this)
Simple hint: If the connection takes a long time to fail (or hangs), then there is no network connectivity to the instance. Check Security Groups and VPC configurations. If an error comes back immediately, then network connectivity is okay and the connection is simply being refused by the instance.

Cannot connect to AWS RDS mariadb instance

We have a working AWS RDS instance, but I cannot connect to this database with its proper credentials. The security group includes the private IP range from which I'm trying to connect.
tracert --> This command returns 'Request timed out' after 3 hops.
What am I missing?
Note: There are people who can connect to the database already. Is their IP whitelisted somewhere while mine is not?
From AWS RDS, click on "VPC security groups" and change the source of the inbound rule of the associated security group to Anywhere. Obviously the issue was the security group's source IP address.
Can you please check whether the IP addresses of the machines that can successfully connect and that of your machine lie within the same CIDR configured in the security group? Also confirm whether your local machine has a firewall preventing outbound traffic on the specified port.
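Both checks can be done from a terminal (a sketch; the security group ID is a placeholder, and checkip.amazonaws.com simply echoes your public IP):

    # Your machine's public IP, to compare against the allowed CIDRs
    curl https://checkip.amazonaws.com

    # The CIDRs the security group actually allows, per port
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
        --query 'SecurityGroups[].IpPermissions[].{Port:FromPort,Sources:IpRanges[].CidrIp}'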