I am trying to simulate an on-premises solution on GCP.
I am not able to bridge the GCE NIC and get DHCP working on it.
I have isolated the issue and have also successfully tested the same setup in a sandboxed Vagrant (VirtualBox) environment.
Both approaches are scripted and available in the following repo:
https://github.com/htssouza/ovs-gcp-issue
The DHCP functionality for Compute Engine only provides and manages the IP address for the instance itself. It does not function as a general-purpose DHCP server for other clients hosted inside the instance.
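Since GCE's DHCP will not answer clients behind a bridge, a common workaround is to run your own DHCP server on the bridge inside the instance, e.g. dnsmasq. A minimal sketch (the bridge name and address range below are assumptions for illustration, not from the repo above):

```
# /etc/dnsmasq.conf -- serve DHCP only on the internal OVS bridge
interface=br0
bind-interfaces
dhcp-range=192.168.100.10,192.168.100.200,12h
```

The nested clients then get addresses from this private range, while the GCE-managed address on the primary NIC is left alone.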
Currently, I have followed the Google quickstart docs for deploying a simple Cloud Run web server connected to AlloyDB. However, the docs all seem to point toward having to use a VM as a PostgreSQL client, which then connects to my AlloyDB cluster instance. I believe a connection can only be made from within the same VPC and/or through a proxy service on the VM (please correct me if I'm wrong).
I was wondering: if I only want to give access to services within the same VPC, is having a VM a must? Or is there another way?
You're correct. AlloyDB currently only allows connections via Private IP, so the only way to talk directly to the instances is from within the same VPC. The reason all the tutorials (e.g. https://cloud.google.com/alloydb/docs/quickstart/integrate-cloud-run, which is likely the quickstart you mention) talk about a VM is that in order to create the databases themselves within the AlloyDB cluster, set user grants, and so on, you need to be able to talk to it from inside the VPC. Another option, for example, would be to set up Cloud VPN to connect your local network to the VPC directly. But that's slow, costly, and kind of a pain.
Cloud Run itself does not require the VM piece; the quickstart I linked above walks through setting up the Serverless VPC Connector, which is the piece required to connect Cloud Run to AlloyDB. The VM in those instructions is only for configuring the PostgreSQL database itself. So once you've done all the configuration you need, you can shut down the VM so it's not costing you anything. If you need to step back in to make configuration changes, you can spin the VM back up, but it's not something that needs to be running for the Cloud Run -> AlloyDB connection.
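As a sketch of the connector setup (the connector, service, image names, region, and IP range here are placeholders, not taken from the quickstart):

```
# Create a Serverless VPC Access connector in the VPC that hosts AlloyDB:
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 \
    --network=default \
    --range=10.8.0.0/28

# Deploy the Cloud Run service with the connector attached:
gcloud run deploy my-service \
    --image=gcr.io/my-project/my-image \
    --region=us-central1 \
    --vpc-connector=my-connector
```

The connector is what gives the Cloud Run service a path to AlloyDB's private IP; the VM plays no part in that path.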
Providing public IP functionality for AlloyDB is on the roadmap, but I don't have any kind of timeframe for when it will be implemented.
I recently deployed the EVE-NG VM into GCP. In short, this VM lets me test virtual routers and switches, which run within EVE-NG. I would like some of these virtual devices to communicate with other VMs that I have set up in GCP. So far I have not had any success. I've read the following article on how to do this: https://openeye.blog/2020/04/21/deploying-eve-ng-on-google-cloud-platform-part-3/ but I am still not able to ping a router from my other Linux VM or vice versa. Do I need to create a second VPC network or something? I'm very new to GCP in general.
I currently develop a small Java web application with the following stack: Java 8, Spring Boot, Hibernate, MariaDB, Docker, AWS (RDS, Fargate, etc.). I use AWS to deploy and run my application. The Java web application runs inside a Docker container managed by AWS Fargate; it communicates with Amazon RDS (a MariaDB instance) via injected secrets and doesn't need to go through the public internet for this communication (it uses the VPC instead). My recent problems began after I rolled out a software update that required me to make some manual database changes using MySQL Workbench, and I could not perform them because of local connectivity problems.
Therefore my biggest problem right now is connectivity to the database from my local machine - I simply can't connect to the RDS instance via MySQL Workbench or even from within the IDE (although it used to work without such problems). MySQL Workbench gave me the following error message as a hint:
After checking the hints from MySQL Workbench, I've also verified that:
I use valid database credentials, URL and port (the app in Fargate has the same secrets injected)
The public accessibility flag on RDS is (temporarily) set to "yes"
The database security group allows MySQL/Aurora connections from my IP address range (I've also tested the 0.0.0.0/0 range without further luck)
Therefore my question is: what else should I check to find the reason for my connectivity failure?
After I changed my laptop's network by switching to mobile internet, the connectivity problem was solved. I therefore suspect that my laptop was unable to establish the socket connection from the previous network (possibly the port or DNS was blocked).
So also don't forget to check network connectivity by establishing a socket connection, as described in this answer.
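The socket check mentioned above can be scripted; a minimal sketch in Python (the RDS hostname in the comment is a placeholder, substitute your own endpoint):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers connection refused, timeouts, and DNS resolution failures
        return False

# Example (hypothetical endpoint):
# can_connect("mydb.xxxxxxxx.eu-central-1.rds.amazonaws.com", 3306)
```

If this returns False while the security group looks correct, the problem is likely on the client network side (a blocked port or DNS), exactly the failure mode described above.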
I lose control of a Debian 9 VM instance in Google Compute Engine when I try to connect to a VPN service provider (NordVPN).
I have an active subscription with NordVPN and I have always used this VPN without problems, both from Windows, from Mobile, and from Linux on-premises virtual machines.
Now, for a project, I need to use it on several Debian 9 VMs in Google Cloud.
I installed the client (I tested both with the vendor's custom client and with OpenVPN using the vendor's server list), but when I connect the VM to the VPN I lose control of the machine: the terminal hangs. This problem does not occur if I use a local VM instead.
I can no longer ping it on either the internal or the external address.
I should preface this by saying that I am not a networking expert.
The test was done with IP forwarding both enabled and disabled at the time the VM was created.
I can only find material online about creating a VPN server within GCP, but that is not my case.
My situation is instead that the VM is the client and the VPN server is external.
I believe this setup must be possible, but I cannot work out what additional settings I need compared to the local VM.
Thank you all.
It seems that the VPN client is receiving network routes from your VPN provider, so the VM is routing all traffic through the VPN and all inbound connections are being dropped.
Your best chance of seeing what's going on inside your VM once network access is unavailable, as you described, is to interact with the serial console [1]. In [1] you can find, step by step, how to access your VM over the serial console from the Google Cloud Platform panel.
Now, in GCP all VMs (normally) have only one vNIC, and all traffic is routed through that vNIC. When you connect your VM to NordVPN, a new network device (tun) is created. If your default route [4] is set to send all default traffic to the tun device (NordVPN) and not to the GCP vNIC, then when a new SYN [5] request reaches your VM, the VM will send its SYN-ACK answer out through the tun interface (NordVPN) and not through eth0 (the Google vNIC). Because the connection did not begin through NordVPN, NordVPN will drop it.
The behavior I explained is entirely expected, because you want all traffic from your VM to go through NordVPN so you can surf the net anonymously. The disadvantage is that your VM will not be able to receive incoming traffic.
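If you want outbound traffic to use the VPN but still be able to reach the VM over its GCP address, one option (see [6]) is to tell the OpenVPN client to ignore the server-pushed default-route redirect. A minimal sketch of the client-side options to add to your .ovpn config (the rest of the config is assumed to come from your provider):

```
# Ignore the "redirect-gateway" option pushed by the server
# (requires OpenVPN 2.4+), so the GCP default route via eth0 survives:
pull-filter ignore "redirect-gateway"

# Alternatively, ignore all pushed routes and add only the ones you want:
# route-nopull
```

With this in place, only the routes you add explicitly go through the tun device, so inbound SSH to the VM's external IP keeps working; the trade-off is that not all outbound traffic is anonymized anymore.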
[1] https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
[2] https://help.ubuntu.com/community/OpenVPN
[3] https://nordvpn.com/es/tutorials/linux/openvpn/
[4] https://www.cyberciti.biz/faq/howto-debian-ubutnu-set-default-gateway-ipaddress/
[5] https://www.inetdaemon.com/tutorials/internet/tcp/3-way_handshake.shtml
[6] https://community.openvpn.net/openvpn/wiki/IgnoreRedirectGateway
I am checking Cloud SQL Private IP connections from different types of clients. I could successfully make a connection from an application hosted in a GKE cluster created as a VPC-native cluster, as described here. Having already done this, I was expecting it to be easier to connect to the Private IP from the same application (a simple Spring Boot application) hosted in a GCE VM. Contrary to my expectations, this does not appear to be so. It is the same Spring Boot application that I am trying to run inside the VM, but it does not seem to be able to connect to the database. I was expecting some connection error, but nothing shows up - no exception is thrown. What is strange is that I am able to connect to the Cloud SQL Private IP via the mysql command line from the same VM, but not from within the Spring Boot application. Has anyone out there faced this before?
The issue was not related to Cloud SQL Private IP. As mentioned in my earlier comment, I was passing the active profile info via the Kubernetes pod configuration, so the Dockerfile did not have this info. To fix the issue, I had to pass the active profile info when the program was started outside Kubernetes. This has a lot of helpful answers on how to do it. If the program is being started via a docker run command, the active profile info can be passed as a command-line argument. See here for a useful reference.
So to summarize, Cloud SQL Private IP works fine from a GCE VM. No special configuration is required on the GCE VM side to get this working.
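As a concrete sketch of passing the active profile outside Kubernetes (the image name, jar name, and profile name here are made up for illustration): Spring Boot reads the SPRING_PROFILES_ACTIVE environment variable, so the profile can be supplied at container start, or as a JVM system property when running the jar directly:

```
# Pass the active profile as an environment variable at container start:
docker run -e SPRING_PROFILES_ACTIVE=gce my-spring-boot-app

# Or, when running the jar directly, as a JVM system property:
java -Dspring.profiles.active=gce -jar app.jar
```

Either way, the profile is supplied at launch time rather than baked into the Dockerfile, which mirrors how the Kubernetes pod configuration was injecting it.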