vm instance failed to connect to backend with ssh - google-cloud-platform

I created a VM instance in Google Cloud and configured it properly with all the necessary software. Then I cloned its disk and created a new VM instance using the cloned disk; however, when I try to connect to the new instance via the SSH button, it fails with code 4003. Reason: failed to connect to backend. Connection via Cloud Identity-Aware Proxy failed.

When an instance does not have a public IP address, SSH in a Browser needs to forward the SSH connection through IAP. The error "failed to connect to backend" indicates that the IAP proxy service was unable to open a TCP connection to the instance.
Ensure you have a firewall rule to allow Cloud Identity-Aware Proxy (IAP) to connect to port 22 on the instance.
Create a firewall rule
To allow IAP to connect to your VM instances, create a firewall rule that:
applies to all VM instances that you want to be accessible by using IAP;
allows ingress traffic from the IP range 35.235.240.0/20 (this range contains all IP addresses that IAP uses for TCP forwarding);
allows connections to all ports that you want to be accessible by using IAP TCP forwarding, for example, port 22 for SSH and port 3389 for RDP.
To allow RDP and SSH access to all VM instances in your network, do the following:
Open the Firewall Rules page and click Create firewall rule
Configure the following settings:
Name: allow-ingress-from-iap
Direction of traffic: Ingress
Target: All instances in the network
Source filter: IP ranges
Source IP ranges: 35.235.240.0/20
Protocols and ports: Select TCP and enter 22,3389 to allow both RDP and SSH.
Click Create.
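The same rule can also be created from the command line; a sketch with gcloud (the `--network=default` value is an assumption — substitute your VPC network name):

```shell
# Allow IAP's TCP-forwarding range to reach SSH (22) and RDP (3389).
# The network name "default" is an assumption; adjust for your VPC.
gcloud compute firewall-rules create allow-ingress-from-iap \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22,tcp:3389 \
  --source-ranges=35.235.240.0/20
```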
In case you haven't enabled IAP yet, you may refer to this link: Enabling IAP for Compute Engine. You can also browse other related IAP guides in the left-hand pane.

Related

GCP - Unable to access instance via SSH

I am unable to access my VM instance on Google Cloud Platform, and I believe I have isolated the issue to the VPC firewall rules. If I allow all ingress traffic (0.0.0.0/0) then I can access the instance via SSH, but if I replace 0.0.0.0/0 with my exact IPv4 address, I receive the following:
No ingress firewall rule allowing SSH found.
If the project uses the default ingress firewall rule for SSH,
connections to all VMs are allowed on TCP port 22. If the VPC network
that the VM’s network interface is in has a custom firewall rule, make
sure that the custom firewall rule allows ingress traffic on the VM’s
SSH TCP port (usually, this is TCP port 22).
I get my IP address both from the browser (searching "what's my IP address" on Google) and from the following command in the terminal, so I know my public IPv4 address is correct:
dig +short myip.opendns.com @resolver1.opendns.com
I am unsure why, when I use my exact public IP address in an 'allow ingress' rule matching on all ports, I am not allowed in, but with a simple switch to 0.0.0.0/0 everything works. Any help would be appreciated.
It seems that when you connect by SSH using the browser, the IP address that initiates the in-browser SSH connection is a Google IP, which is why I was unable to connect given the firewall rules I had set in place.
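Given that observation, one fix (a sketch — the rule name is hypothetical, and this uses the IAP forwarding range 35.235.240.0/20 described in the first answer above) is to add a second ingress rule for Google's range alongside the rule for your own IP:

```shell
# In-browser SSH originates from Google's IAP range, not your own IP,
# so your exact-IP rule alone will block it. Rule name is hypothetical.
gcloud compute firewall-rules create allow-ssh-from-iap \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=35.235.240.0/20
```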

EC2 Instance cannot connect to github using SSH

I'm trying to deploy a web app with Laravel Forge and AWS. I created an EC2 instance using the Laravel Forge control panel. I created a security group for this instance.
Outbound rules
Inbound rules v1
Inbound rules v2
The allowed SSH connections are described in this Laravel Forge guide:
https://forge.laravel.com/docs/1.0/servers/providers.html
So, the problem is that when I try to install the repository, I get this error on the EC2 instance.
SSH error
I also checked that my instance's SSH public key is registered in my GitHub account.
Your Outbound rules only permit connections on port 80 (HTTP) and port 443 (HTTPS).
However, SSH uses port 22. This is causing the connection to fail.
You should add port 22 to the Outbound rules.
That said, it is generally considered acceptable to allow all outbound connections from an Amazon EC2 instance, since you can 'trust' the software running on the instance. I would recommend allowing all outbound connections rather than restricting them to specific ports.
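Once outbound port 22 is open (or all outbound traffic is allowed), connectivity from the instance to GitHub can be verified with ssh's test handshake:

```shell
# -T disables pseudo-terminal allocation; on success GitHub replies
# with a greeting ("Hi <username>! ...") and closes the connection.
ssh -T git@github.com

# If the greeting never appears, a quick TCP-level check helps tell a
# network/security-group problem apart from a key problem:
nc -vz github.com 22
```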

Aws Connection to EC2 timed out over SSH

I have tried to connect to EC2 using SSH but get: ssh: connect to host XXXXXXXXX port 22: Connection timed out
Note: XXXXXXXX is user@IP
Also I have checked security groups. Inbound rules are allowed for ssh
SSH TCP 22 0.0.0.0/0 -
SSH TCP 22 ::/0 -
The first time, I was able to log in using SSH. After that, I installed a LAMP stack on the EC2 instance. I think I forgot to add SSH to the ufw rules.
I am also unable to connect using the browser-based SSH connection in AWS, and the Session Manager connection method shows errors.
How can I connect using SSH or some other method, so that I can allow SSH in the ufw rules?
This indicates that you cannot reach the host.
From your question I can see you have validated security group access; however, there are some other steps you should take to investigate this:
Is the IP address a public IP? If so, ensure that the instance's subnet has a route table with an Internet Gateway associated with it.
Is the IP address a private IP? Do you have a VPN or Direct Connect connection to the instance? If not, you will either need to set one up or use a bastion host. If you do, ensure that the route tables reference back to your on-premises network range.
If you're not using the default NACLs for your subnet, check that they allow both the ports from your security group and the ephemeral port ranges.
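For the ufw lockout specifically, a recovery sketch — assuming you can regain a shell out-of-band, for example via the EC2 Serial Console or Session Manager once that is working:

```shell
# Re-open SSH in ufw and confirm the rule took effect.
sudo ufw allow 22/tcp
sudo ufw reload
sudo ufw status verbose
```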

connecting to VM instance having no external IP

I am trying to connect to a Google Cloud VM instance that has no external IP address via Cloud Shell and the Cloud SDK.
The Google documentation says that we can connect to it using IAP:
Connecting through IAP: refer using IAP
a) Grant the roles/iap.tunnelResourceAccessor role to the user that wants to connect to the VM.
b) Connect to the VM using below command
gcloud compute ssh instance-name --zone zone
OR
Using IAP for TCP forwarding: refer using TCP forwarding
We can also connect by setting an ingress firewall rule for the IP range 35.235.240.0/20 with port TCP:22
and selecting the IAM role Cloud IAP > IAP-Secured Tunnel User.
What's the difference between these two approaches, and what's the difference between these two IAM roles?
roles/iap.tunnelResourceAccessor
IAP-secured Tunnel User
I am new to cloud so please bear with my basic knowledge.
It's exactly the same thing. Look at this page
IAP-Secured Tunnel User (roles/iap.tunnelResourceAccessor)
You have the display name of the role, IAP-Secured Tunnel User, which you see in the GUI, and the technical name of the role, roles/iap.tunnelResourceAccessor, which you have to use in scripts and the CLI.
The link mentioned in the question ("refer using IAP") actually points to the
Connecting to instances that do not have external IP addresses > Connecting through a bastion host.
Connecting through a bastion host is a separate method from access via IAP.
As described in the document Connecting to instances that do not have external IP addresses > Connecting through IAP,
IAP's TCP forwarding feature wraps an SSH connection inside HTTPS.
IAP's TCP forwarding feature then sends it to the remote instance.
Therefore both parts of the question (before OR and after OR) belong to the same access method: Connect using Identity-Aware Proxy for TCP forwarding. Hence the answer to the first question is "no difference" because all of that describes how the IAP TCP forwarding works and those are the steps to set it up and use:
1. Create a firewall rule that:
applies to all VM instances that you want to be accessible by using IAP;
allows ingress traffic from the IP range 35.235.240.0/20 (this range contains all IP addresses that IAP uses for TCP forwarding);
allows connections to all ports that you want to be accessible by using IAP TCP forwarding, for example, port 22 for SSH.
2. Grant permissions to use IAP:
Use GCP Console or gcloud to add a role IAP-Secured Tunnel User (roles/iap.tunnelResourceAccessor) to users.
Note: Users with Owner access to a project always have permission to use IAP for TCP forwarding.
3. Connect to the target VM with one of the following tools:
GCP Console: use the SSH button in the Cloud Console;
gcloud compute ssh INSTANCE_NAME
There's an important explanation of how IAP TCP forwarding is invoked for accessing a VM instance without Public IP. See Identity-Aware Proxy > Doc > Using IAP for TCP forwarding:
NOTE. If the instance doesn't have a Public IP address, the connection automatically uses IAP TCP tunneling. If the instance does have a public IP address, the connection uses the public IP address instead of IAP TCP tunneling.
You can use the --tunnel-through-iap flag so that gcloud compute ssh always uses IAP TCP tunneling.
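Put together, a minimal sketch of steps 2 and 3 with gcloud (the project, user, instance, and zone names are placeholders):

```shell
# Step 2: grant the tunnel role to a user (all names are hypothetical).
gcloud projects add-iam-policy-binding my-project \
  --member=user:alice@example.com \
  --role=roles/iap.tunnelResourceAccessor

# Step 3: connect, forcing the connection through the IAP tunnel even
# if the instance happens to have a public IP address.
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap
```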
As already noted by guillaume blaquiere, roles/iap.tunnelResourceAccessor and IAP-secured Tunnel User are not different IAM roles, but the Role Name and the Role Title of the same role. There is one more resource that presents this in a convenient form:
Cloud IAM > Doc > Understanding roles > Predefined roles > Cloud IAP roles

Accessing RDS through bastion host with port forwarding not working

I'm trying to establish port forwarding to my RDS instance in a private subnet via a bastion host in a public subnet with the following command:
ssh -A -NL 3007:mydb3.co2qgzotzkku.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-562243-250-177.eu-west-1.compute.amazonaws.com
but I can't get a connection to the RDS instance.
The security group for the bastion host allows only SSH on port 22 from my IP,
and the security group for the RDS instance allows traffic from the bastion host's security group and SSH from my IP.
Besides that, the ACLs for the subnets are open to all TCP traffic.
Does anybody have a tip on what is missing to get the tunnel running?
Thanks, A
I think you are missing ports 3306 and 3307. Allow those ports in both security groups and it will work.
Since you said you are accessing the bastion via a key pair, your new command should be:
ssh -N -L 3007:mydb3.co2qgzotzkku.eu-west-1.rds.amazonaws.com:3306 ubuntu@ec2-562243-250-177.eu-west-1.compute.amazonaws.com -i /path/to/key.pem
I would suggest removing -A from the command, as it enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file.
Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent; however, they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent.
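For completeness: once the tunnel from the question is up, the database is reached through the local forwarded port rather than the RDS endpoint directly (the MySQL client and the `dbuser` credential here are assumptions):

```shell
# Local port 3007 forwards to the RDS endpoint's 3306 via the bastion;
# the username "dbuser" is hypothetical.
mysql -h 127.0.0.1 -P 3007 -u dbuser -p
```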