I can't connect to my VM on GCP. Everything was fine until I stopped the instance and started it again. Below is syslog1, from the GCP console of the machine I can't connect to.
Below is syslog2 from a newly created machine whose network interfaces started normally.
Related
I disabled the network card of my virtual machine in Google Cloud (right click - Disable). Now I can neither reconnect to the VM nor re-enable the card. I'm new to this and can't figure out how to reactivate it.
If anyone has a solution, it would be helpful.
You cannot re-enable the interface remotely because the VM lost contact with Google Cloud the moment you disabled the network interface.
You need to connect to the machine through its serial port (like in the old days).
Open the VM in the web interface and click "Edit".
Then select "Enable connecting to serial ports" (it is the first option you can choose), and save the changes.
Open the VM again and you'll see under "Remote access" that you can SSH to the machine AND connect to the serial port.
Once you have serial port access, you can log in.
If you don't have a local user in the VM (because you always logged in with your GCP user), you'll need to reboot the VM while connected over the serial console and do a root password recovery.
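If you prefer the command line, the same two steps can be done with gcloud; a minimal sketch, assuming an instance named my-vm in zone us-central1-a (both placeholders):

    # Enable interactive serial console access on the instance
    gcloud compute instances add-metadata my-vm \
        --zone=us-central1-a \
        --metadata=serial-port-enable=TRUE

    # Open a session on serial port 1, the default login console
    gcloud compute connect-to-serial-port my-vm --zone=us-central1-a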
I recently deployed the EVE-NG VM into GCP. In short, this VM lets me test virtual routers and switches, which run inside EVE-NG. I would like some of these virtual devices to communicate with other VMs that I have set up in GCP. So far I have not had any success. I've read the following article on how to do this: https://openeye.blog/2020/04/21/deploying-eve-ng-on-google-cloud-platform-part-3/ but I am still not able to ping a router from my other Linux VM, or vice versa. Do I need to create a second VPC network or something? I'm very new to GCP in general.
We have created an Ubuntu-based GCP VM instance with 2x Nvidia T4 GPUs. We have noticed that after a while it stops responding. On the GCP console the status still shows as Running, but when we try to access it via SSH from the GCP console it doesn't respond either. After a STOP and START it works fine again.
What could be the issue?
I lose control of a Debian 9 VM instance in Google Compute Engine when I connect it to a VPN service provider (NordVPN).
I have an active NordVPN subscription and have always used this VPN without problems from Windows, from mobile, and from on-premises Linux virtual machines.
Now, for a project, I need to use it on several Debian 9 VMs in Google Cloud.
I installed the client (I tested both the vendor's custom client and OpenVPN with the vendor's server list), but as soon as the VM connects to the VPN I lose control of the machine and the terminal hangs. This problem does not occur if I use a local VM instead.
I can no longer ping the VM on either its internal or its external address.
I should say up front that I am not a networking expert.
I tested with IP forwarding both enabled and disabled at the time the VM was created.
Online I only find material about creating a VPN server inside GCP, which is not my case.
In my situation the VM is the client and the VPN server is external.
I believe this setup is possible, but I cannot figure out what additional settings I need compared to the local VM.
Thank you all.
It seems that the VPN client is receiving network routes from your VPN provider, so the VM routes all traffic through the VPN and all inbound connections are dropped.
Your best chance of seeing what is going on inside your VM once network access is unavailable, as you described, is to interact with the serial console [1]. In [1] you will find step-by-step instructions for accessing your VM through the serial console from your Google Cloud Platform panel.
Now, in GCP a VM (normally) has only one vNIC, and all traffic is routed through that vNIC. When you connect your VM to NordVPN, a new network device (tun) is created. If your default route [4] is set to send all default traffic to the tun device (NordVPN) and not to the GCP vNIC, then when a new SYN [5] request reaches your VM, the VM sends its SYN-ACK reply through the tun device (NordVPN) instead of eth0 (the Google vNIC). Because the connection did not begin through NordVPN, NordVPN drops it.
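From the serial console you can confirm this by inspecting the routing table; a quick check (device names and addresses below are illustrative, not taken from your VM):

    # Show the routing table; while the VPN is up, OpenVPN's
    # redirect-gateway mechanism usually inserts /1 routes via tun0
    ip route show

    # Illustrative output:
    # 0.0.0.0/1 via 10.8.0.1 dev tun0
    # 128.0.0.0/1 via 10.8.0.1 dev tun0
    # default via 10.128.0.1 dev eth0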
The behavior I described is entirely expected, because you want all traffic from your VM to go through NordVPN so you can surf the net anonymously. The disadvantage is that your VM will no longer be able to receive incoming traffic.
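If you instead want the VM to stay reachable over SSH and only send selected traffic through the VPN, you can tell the OpenVPN client to ignore the provider's pushed default route (see [6]); a minimal sketch of the relevant lines in the client .ovpn file:

    # Ignore the pushed routes that replace the default gateway,
    # so inbound SSH keeps flowing through the GCP vNIC (eth0)
    pull-filter ignore "redirect-gateway"

    # Or ignore all pushed routes and add only the ones you need:
    # route-nopull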
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
[2] https://help.ubuntu.com/community/OpenVPN
[3] https://nordvpn.com/es/tutorials/linux/openvpn/
[4] https://www.cyberciti.biz/faq/howto-debian-ubutnu-set-default-gateway-ipaddress/
[5] https://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml
[6] https://community.openvpn.net/openvpn/wiki/IgnoreRedirectGateway
I am checking Cloud SQL Private IP connections from different types of clients. I could successfully make a connection from an application hosted in a GKE cluster, which was created as a VPC-native cluster as described here. Having already done this, I expected it would be easier to connect to the Private IP from the same application (a simple Spring Boot application) hosted in a GCE VM. Contrary to my expectations, this does not appear to be so. It is the same Spring Boot application that I am trying to run inside a VM, but it does not seem to be able to connect to the database. I was expecting some connection error, but nothing shows up - no exception thrown. What is strange is that I am able to connect to the Cloud SQL Private IP via the mysql command line from the same VM, but not from within the Spring Boot application. Has anyone out there faced this before?
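For context, the command-line check that succeeded was along these lines (the private IP, user, and database name here are placeholders, not the asker's actual values):

    # Connect to the Cloud SQL instance's private IP from the VM;
    # -p prompts for the password interactively
    mysql -h 10.0.0.5 -u appuser -p mydb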
The issue was not related to Cloud SQL Private IP. As mentioned in my earlier comment, I was passing the active profile info via the Kubernetes pod configuration, so the Dockerfile did not have this info. To fix the issue, I had to pass the active profile info when the program was started outside Kubernetes. This has a lot of helpful answers on how to do this. If the program is being started via a docker run command, the active profile info can be passed as a command-line argument, as sketched below. See here for a useful reference.
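A hedged sketch of both common ways to pass the profile at container start (the image name my-spring-app and the profile prod are placeholders):

    # Pass the profile as a command-line argument; with an ENTRYPOINT
    # like "java -jar app.jar", docker appends these args to it
    docker run my-spring-app --spring.profiles.active=prod

    # Or set the environment variable that Spring Boot reads
    docker run -e SPRING_PROFILES_ACTIVE=prod my-spring-app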
So to summarize, Cloud SQL Private IP works fine from a GCE VM. No special configuration is required at the GCE VM end to get this working.