I created a VM in the us-west1 region, assigned an external IP to it, and attached it to a subnet.
I created a firewall rule with the following options: Direction - ingress, Source filters - 0.0.0.0/0, Protocols and ports - tcp:22, Targets - All instances in the network.
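For reference, the rule described above corresponds roughly to this gcloud command (the rule name is illustrative; adjust --network to the VPC the VM is in):
gcloud compute firewall-rules create allow-ssh-ingress --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0 --network=default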
When I try to SSH into the VM with the gcloud command gcloud compute ssh --zone "xxxx" "xxxxxxxx" --project "xxxxxx", it does not let me connect and I get the error below:
ssh: connect to host xx.xx.xx.xx port 22: Operation timed out
Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.
gcloud compute ssh xxxxxxx --project=xxxxxx --zone=xxxxxx --troubleshoot
Or, to investigate an IAP tunneling issue:
gcloud compute ssh xxxxxxx --project=xxxxxxx --zone=xxxxx --troubleshoot --tunnel-through-iap
Related
I have a private cluster with a pod that prints a file to a network printer. The printer is accessible from VMs outside the cluster, but it is not accessible from the pod; I am unable to ping its IP either. Do we need some additional configuration in GKE to access the printer, or any external application for that matter?
Here are the common things to check and do when you can't access services outside of your GKE cluster:
Check if you can access the service from the GKE VM directly by SSHing into the VM and then doing ping/curl to verify connectivity
Verify that the ip-masq-agent is running on the GKE cluster and that it's configured correctly
Configure Cloud NAT for the network when you're using Private Clusters so you can still access internet resources
See more details for each of these steps below:
Verifying connectivity from the GKE node itself
SSH into the node and run ping / curl by running:
gcloud compute ssh my-gke-node-name --tunnel-through-iap
curl https://facebook.com
ping my-network-printer
Verify ip-masq-agent configuration
Check if the ip-masq-agent DaemonSet is running:
kubectl get ds ip-masq-agent -n kube-system
Verify that the ip-masq-agent configuration is set to ensure all RFC1918 addresses get masqueraded except the GKE node CIDR and pod CIDR:
kubectl describe configmaps/ip-masq-agent -n kube-system
Note that the default ip-masq-agent configuration usually includes too many RFC1918 ranges in the nonMasqueradeCIDRs setting. You need to ensure your external network printer's address isn't covered by any of the nonMasqueradeCIDRs ranges.
If it is covered, or if no CIDRs are set at all, set nonMasqueradeCIDRs to include only the GKE node CIDR and the GKE pod CIDR for your cluster. You can edit the ConfigMap by running:
kubectl edit configmap ip-masq-agent --namespace=kube-system
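For illustration, the edited ConfigMap might end up looking roughly like this; the CIDRs below are placeholders and must be replaced with your cluster's actual node and pod ranges:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.100.0.0/16   # placeholder node CIDR
      - 10.4.0.0/14     # placeholder pod CIDR
    resyncInterval: 60s
EOF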
More docs on GKE ip-masq-agent here: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent#edit-ip-masq-agent-configmap
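Configuring Cloud NAT for the cluster's network
If the private cluster also needs to reach resources on the public internet (the third item in the list above), Cloud NAT can be set up along these lines; the router name, NAT name, network, and region are illustrative:
gcloud compute routers create nat-router --network=my-vpc-network --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router --region=us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges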
I'm attempting to scp to an EC2 instance inside a VPC and the connection times out.
Established facts:
I can SSH into the instance itself - the keypair works and the instance's subnet is open to the internet.
The folder I'm attempting to transfer to on the EC2 has permissions 700
The command I'm running is:
scp -i mykey.pem dumbtest.txt ubuntu@ec2-<my-ip>.compute-1.amazonaws.com:/home/ubuntu
Are there additional steps I need to take to scp to EC2 instances in a VPC?
I am trying to access a Cloud SQL instance, which is private and has a private services connection, from a GCE instance.
Following the private services access docs, I set up the VPC and firewall rules and created the Cloud SQL and GCE instances in the same VPC:
https://cloud.google.com/vpc/docs/configure-private-services-access
But from the GCE instance, neither pinging the SQL instance nor connecting to the database worked.
1. create VPC
gcloud compute networks create test-custom-vpc --subnet-mode=custom --bgp-routing-mode=global --mtu=1460
2. create subnet
gcloud compute networks subnets create vpc-sb-1 --network=test-custom-vpc --range=10.100.0.0/16 --region=asia-northeast1
3. create IP range for private service connection
gcloud compute addresses create vpc-peering-range --global --purpose=VPC_PEERING --addresses=192.168.0.0 --prefix-length=16 --description=description --network=test-custom-vpc
4. create VPC peering for SQL
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --ranges=vpc-peering-range --network=test-custom-vpc --project=my-project
5. create MySQL in VPC
gcloud --project=my-project beta sql instances create vpc-sql-1 --network=test-custom-vpc --no-assign-ip
6. create GCE instance in VPC
gcloud compute instances create vm-in-sb-1 --subnet vpc-sb-1 --zone asia-northeast1-b
7. create FW rule (for now, allow all IPs/protocols)
gcloud compute firewall-rules create allow-all --network test-custom-vpc --direction ingress --action allow --rules all
Then I SSH into the VM and check the connection between the VM and the SQL instance.
gcloud sql instances list
NAME DATABASE_VERSION LOCATION TIER PRIMARY_ADDRESS PRIVATE_ADDRESS STATUS
vpc-sql-1 MYSQL_5_7 us-central1-b db-n1-standard-1 - 192.168.0.3 RUNNABLE
-> SQL private IP is 192.168.0.3
ssh login
gcloud beta compute ssh --zone "asia-northeast1-b" "vm-in-sb-1" --project "my-project"
check connection
ping 192.168.0.3
no response
mysql --host=192.168.0.3 --user=root --password
psql -h 192.168.0.3 -U postgres
psql: could not connect to server: Connection timed out
Is the server running on host "192.168.0.3" and accepting
TCP/IP connections on port 5432?
I have no idea which part of the configuration is wrong.
I replicated your case and all of the configuration works well, but please note: the command below from step #5 creates a Cloud SQL instance for MySQL, not for Postgres:
gcloud --project=my-project beta sql instances create vpc-sql-1 --network=test-custom-vpc --no-assign-ip
If you want to create a Cloud SQL instance for Postgres, use the command below:
gcloud --project=my-project beta sql instances create vpc-sql-1 --database-version=POSTGRES_12 --cpu=2 --memory=7680MB --network=test-custom-vpc --no-assign-ip
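If you are unsure which engine an existing instance is running, you can check it directly with a standard describe call:
gcloud sql instances describe vpc-sql-1 --format="value(databaseVersion)"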
The problem is that you are connecting to Cloud SQL for MySQL with the Postgres database client. To connect properly, use the client that matches the engine:
For MySQL:
mysql --host=192.168.0.3 --user=root --password
For Postgres:
psql -h 192.168.0.3 -U postgres
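As an additional sanity check from the VM, a quick port probe can show which database port is actually listening (assuming nc is available on the VM; the IP is the private address from the listing above):
nc -zv 192.168.0.3 3306
nc -zv 192.168.0.3 5432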
I am trying to provision 2 EC2 instances in private subnets using Ansible playbooks. My infrastructure includes:
Bastion Host on a public subnet
2 EC2 instances on 2 private subnets
NAT Gateway for outgoing connections
Application Load Balancer
My question is how to run the Ansible playbook from localhost so that it affects the private instances. Can I SSH-forward the playbook, or does the playbook have to reside on the bastion host and use the private IPs as hosts?
Create the SSH config file ~/.ssh/config and add the following lines to it:
Host bastion
    HostName bastion_ip
    User bastion_user
    IdentityFile ~/.ssh/mykey.pem

Host private_instance
    HostName 10.0.0.11
    User private_ec2_user
    ProxyCommand ssh bastion -W %h:%p
    IdentityFile ~/.ssh/mykey.pem
My question is how to run the Ansible playbook from localhost so that it affects the private instances.
Now that you have configured the SSH config file, all you need to type is:
ssh private_instance
This will create an SSH tunnel to your private instance; you no longer need to type a complex or lengthy command every time.
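Because Ansible connects through the same OpenSSH client, it picks up ~/.ssh/config as well, so you can run the playbook from your laptop against the alias. A minimal sketch (the inventory and playbook file names are illustrative):
printf '[private]\nprivate_instance\n' > inventory.ini
ansible-playbook -i inventory.ini site.yml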
Ansible allows the use of SSH configuration options, and ProxyCommand can come to the rescue when forwarding the connection through the bastion to hosts in the private subnet. Here is an example:
ssh -o ProxyCommand="ssh ubuntu@52.50.10.5 'nc 192.168.0.20 22'" ubuntu@nothing
The above command will, for example, first connect to 52.50.10.5 via SSH, and then open a socket to 192.168.0.20 on port 22. The socket connection (which is connected to the remote SSH server) is then passed to the original SSH client command invocation to utilize.
Source : https://spin.atomicobject.com/2016/05/16/ansible-aws-ec2-vpc/
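The same idea can be handed to Ansible without touching ~/.ssh/config by passing the ProxyCommand on the command line; a sketch reusing the bastion address from the example above (the inventory, playbook, and key names are illustrative):
ansible-playbook -i inventory.ini site.yml --ssh-common-args='-o ProxyCommand="ssh -i ~/.ssh/mykey.pem -W %h:%p ubuntu@52.50.10.5"'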
We are unable to connect to the VM instance via SSH on Google Cloud Platform.
We are trying with the help of the 'SSH' button available in the browser console.
But the following message is received:
We are unable to connect to the VM on the port 22.
We have tried stopping and starting the VM, but it did not help.
You need to create a firewall rule that enables SSH access on port 22 for your VMs. It is better to set the 'Target' to a network tag instead of enabling SSH access for all of the machines on your VPC network.
You can use the CLI to perform this operation, for example on the default VPC:
gcloud compute firewall-rules create <rule-name> --allow tcp:22 --network "default" --source-ranges "<source-range>"
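If you follow the network-tag recommendation above, a tag-scoped variant could look like this (the rule name and tag are illustrative):
gcloud compute firewall-rules create allow-ssh --allow tcp:22 --network "default" --source-ranges "<source-range>" --target-tags "allow-ssh"
gcloud compute instances add-tags <instance-name> --zone <zone> --tags "allow-ssh"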