GitLab on a Compute Engine instance is not available

We deployed a GitLab server on a Compute Engine instance with an attached static external IP address. From time to time, the server becomes unreachable for requests coming from Russia. A "tracert" shows that packets do not go beyond the address 209.85.243.152 (Google LLC). However, the GitLab site is fully accessible from other regions at any time.

The problem turned out to be the preinstalled sshguard, which had mistakenly added my IP address to its blocklist.
There are step-by-step instructions for the fix in this answer.
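For reference, here is a minimal sketch of how the block could be inspected, cleared, and prevented from recurring. It assumes sshguard is using its iptables backend (chain named `sshguard`) and a Debian-style whitelist path; both are assumptions that may differ on your image, so check sshguard.conf before relying on it.

```python
# Hypothetical sketch: inspect and clear an sshguard block, then whitelist the IP.
# Assumes sshguard's iptables backend (chain "sshguard") and a Debian-style
# whitelist file path; adjust for your distribution/backend (e.g. nftables).
import subprocess

MY_IP = "203.0.113.10"                  # placeholder: your office/VPN IP
WHITELIST = "/etc/sshguard/whitelist"   # assumed path; verify in sshguard.conf

# Show the current block list.
subprocess.run(["iptables", "-L", "sshguard", "-n", "--line-numbers"], check=True)

# Remove the blocking rule for our IP (sshguard's iptables backend adds
# "-s <ip> -j DROP" rules to its chain).
subprocess.run(["iptables", "-D", "sshguard", "-s", MY_IP, "-j", "DROP"], check=False)

# Whitelist the IP so sshguard does not block it again, then restart the service.
with open(WHITELIST, "a") as f:
    f.write(MY_IP + "\n")
subprocess.run(["systemctl", "restart", "sshguard"], check=True)
```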

Related

How can I deploy and connect to a PostgreSQL instance in AlloyDB without utilizing a VM?

I have followed the Google quickstart docs for deploying a simple Cloud Run web server that is connected to AlloyDB. However, the docs all seem to point towards having to use a VM as a PostgreSQL client, which is then connected to my AlloyDB cluster instance. I believe a connection can only be made from within the same VPC and/or through a proxy service on the VM (please correct me if I'm wrong).
I was wondering: if I only want to give access to services within the same VPC, is having a VM a must, or is there another way?
You're correct. AlloyDB currently only allows connecting via private IP, so the only way to talk directly to the instances is from within the same VPC. The reason all the tutorials (e.g. https://cloud.google.com/alloydb/docs/quickstart/integrate-cloud-run, which is likely the quickstart you mention) talk about a VM is that in order to create the databases themselves within the AlloyDB cluster, set user grants, etc., you need to be able to talk to it from inside the VPC. Another option, for example, would be to set up Cloud VPN to connect your LAN to the VPC directly, but that's slow, costly, and kind of a pain.
Cloud Run itself does not require the VM piece; the quickstart linked above walks through setting up the Serverless VPC Access connector, which is the piece required to connect Cloud Run to AlloyDB. The VM in those instructions is only for configuring the PostgreSQL database itself, so once you've done all the configuration you need, you can shut down the VM so it isn't costing you anything. If you need to step back in to make configuration changes, you can spin the VM back up, but it's not something that needs to be running for the Cloud Run -> AlloyDB connection.
Public IP functionality for AlloyDB is on the roadmap, but I don't have any kind of timeframe for when it will be implemented.
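To make the division of responsibilities concrete, here is a minimal sketch (not the official sample) of what the Cloud Run service itself does once the connector is in place: it connects to the cluster's private IP like any other PostgreSQL server. The host, credentials, and database name below are placeholders and would normally come from environment variables or Secret Manager.

```python
# Minimal sketch: Cloud Run service reaching AlloyDB over the Serverless VPC
# Access connector. Host, credentials and database name are placeholders
# (assumptions); in practice they come from env vars / Secret Manager.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "10.1.2.3"),   # AlloyDB instance private IP
    port=5432,
    dbname=os.environ.get("DB_NAME", "postgres"),
    user=os.environ.get("DB_USER", "postgres"),
    password=os.environ["DB_PASS"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
conn.close()
```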

External requests in Cloud Run project

Currently, my Cloud Run projects make their external requests from random IPs out of Google's IP pool.
A new microservice I am developing needs to call a critical external microservice that restricts access by IP.
Does Google Cloud Platform have any solution for channelling outbound traffic through a specific IP? Some kind of proxy for this kind of need?
Thanks
As clarified in this other case here, there is no way to directly set up a static or specific IP for outbound requests from Cloud Run. As clarified in this answer from a Google developer, unless Cloud Run starts supporting Cloud NAT or Serverless VPC Access, you won't be able to achieve such a configuration.
There are some workarounds.
One of them would be to create a SOCKS proxy by running an SSH client that routes the traffic through a GCE VM instance that has a static external IP address. More details here.
Another solution is to send your outbound requests through a proxy that has a static IP. You can get details here.
Both workarounds were provided by developers from Google, so they should be good to go.
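As an illustration of the first workaround, here is a minimal sketch of routing an outbound request through such a SOCKS tunnel. It assumes the tunnel is already open on localhost:1080 (for example via an SSH dynamic forward, `ssh -D 1080 -N`, against the static-IP VM) and that the requests[socks] extra (PySocks) is installed; the target URL is a placeholder.

```python
# Minimal sketch (assumption-laden): send an outbound request through a SOCKS5
# tunnel that forwards traffic via a GCE VM holding a static external IP.
# Prerequisites assumed: an SSH dynamic forward running on localhost:1080
# (e.g. `ssh -D 1080 -N user@vm-static-ip`) and `pip install requests[socks]`.
import requests

proxies = {
    "http": "socks5h://localhost:1080",   # socks5h: resolve DNS through the proxy too
    "https": "socks5h://localhost:1080",
}

# The partner API sees the VM's static external IP, not Cloud Run's dynamic one.
resp = requests.get("https://api.example.com/whoami", proxies=proxies, timeout=10)
print(resp.status_code, resp.text)
```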

Google Cloud Redis - IP Address changed without warning

TL;DR: I could use some advice on how to set up Redis for production use on GCP. It just switched IP addresses on us randomly, there is nothing in the documentation about that, and I have no idea how to build a stable solution around that possibility.
Background:
We've been using Google Cloud for a few years and had a stable Redis Memorystore instance on the Standard Tier.
In the past few days, our web servers started slowly crashing every so often. After investigating, something was locking up when connecting to Celery / Redis, and we found that all our config files had 10.0.0.3 as the Redis instance, while the IP address of the server was listed as 10.0.0.4. This had never changed before, and our configs are in git, so we're sure they were unchanged.
Since Celery won't boot up with a bad connection, we know it was correct on Tuesday when we pushed up new code. It seems like the server failed over and somehow issued an IP address change on us. As evidence:
Our usage graph bizarrely changes color at a specific point,
Which matches our error logs "[2020-06-16 03:09:21,873: ERROR/MainProcess] Error in timer: ReadOnlyError("You can't write against a read-only slave.",)"
All the documentation we have found says the IP address should stay the same, but given that this didn't happen, I'm hoping for some feedback on how one would work around a non-static IP in this case on GCP.
Memorystore does not support static IP addresses. Some scenarios where an IP address change can occur are restarts or changes to the connection mode.
From a review of the Memorystore for Redis networking page: when using direct access connection via IP address, your project sets up a VPC network peering connection with Google's internal project, where the instance is managed. This creates an allocated IP range for Memorystore to use for the instances; this range can either be provided by you or picked from the available space (a /29 block by default).
On the other hand, Memorystore for Redis exposes uptime as a metric that is available through Cloud Monitoring (formerly Stackdriver). This can be used as a health check for the instance, as you will be able to determine if there has been a restart or points of unavailability.
Following the point above, you can set up an alert on the uptime metric directly in Cloud Monitoring. Unfortunately, there is nothing specific to IP address changes though.
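One way to cope with the non-static address is to look up the instance's current endpoint at startup through the Memorystore Admin API instead of baking the IP into config files. A minimal sketch, assuming the google-cloud-redis and redis client libraries are installed and using hypothetical project/region/instance names:

```python
# Minimal sketch: resolve the Memorystore instance's current host/port at startup
# via the Admin API instead of hardcoding 10.0.0.x in config files.
# Requires: pip install google-cloud-redis redis
# The project/region/instance names below are placeholders (assumptions).
from google.cloud import redis_v1
import redis

def current_endpoint(project: str, region: str, instance_id: str):
    client = redis_v1.CloudRedisClient()
    name = f"projects/{project}/locations/{region}/instances/{instance_id}"
    instance = client.get_instance(name=name)
    return instance.host, instance.port

host, port = current_endpoint("my-project", "us-central1", "my-redis")
r = redis.Redis(host=host, port=port)
print(r.ping())
```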

AWS Migration - Hardcoded IP addresses

We are looking to migrate to AWS at the start of the new year.
One of the issues we have is the fact that some of the applications that we will be migrating over have been configured with hardcoded IP addresses (DB Hostnames).
We will be using ELBs to fully utilise the elasticity and dynamic nature of AWS for our infrastructure. With this in mind, those IP addresses that were static before will now be dynamic (so frequently assigned new IPs).
What is the best approach to solving these hardcoded values?
In particular IP addresses? I appreciate that usernames, passwords, etc. can be placed into a single config file and read with an ini parser.
I think one solution could be:
1) To make an AWS API call to query what the IP address of the host is, then use that value.
Appreciate any help with this!
You should avoid hard-coding IP addresses and instead use the hostname of the referenced resource. With either RDS or a self-hosted DB running on EC2, you can use DNS to resolve the IP by hostname at run time.
Assuming you are using CodeDeploy to provision the software, you can use the CodeDeploy lifecycle event hooks to configure the application after the software has been installed. An AfterInstall hook could be configured to retrieve your application parameters and make them available to the application before it starts.
Regarding the storage of application configuration data, consider using the AWS Parameter Store. Using it as a secure and durable source of application configuration data, you can retrieve the DB host address and other application parameters at software provision time, using the CodeDeploy features mentioned above.
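To illustrate the Parameter Store approach, here is a small sketch of fetching the DB hostname at provision/startup time and letting DNS resolve it at run time. It assumes boto3 is available on the instance and uses a hypothetical parameter name:

```python
# Minimal sketch: read the DB hostname from AWS Systems Manager Parameter Store
# instead of a hardcoded IP, then let DNS resolve it at run time.
# The parameter name "/myapp/db/host" is a hypothetical example.
import socket
import boto3

ssm = boto3.client("ssm")
db_host = ssm.get_parameter(
    Name="/myapp/db/host",          # e.g. mydb.abc123.eu-west-1.rds.amazonaws.com
    WithDecryption=True,            # harmless for plain String parameters
)["Parameter"]["Value"]

# Resolve at run time; the IP behind the hostname can change freely.
print(db_host, "->", socket.gethostbyname(db_host))
```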

About region / zone of Google Compute Engine's VM

I have an instance which was created in the Australia region; the zone is australia-southeast1-a. However, I find that the external IP still appears to be located in the US.
I tried creating another instance in another region (asia) and logged in using SSH, but haven't noticed any significant difference in latency; the responses from both are not very fast.
My question is: have I correctly set up the region to Australia? Or is there any configuration that I have missed?
Your VM setup and its configuration are perfectly fine, and your hardware is physically located in the Australian region. The concern over the IP's location is just a common confusion; this has happened to many customers.
Most external geo-IP services depend on the SWIP database, and most of Google's IPs are SWIP'ed to Mountain View, CA. Because of this, even a VM created outside the US (in your case, the Australian region) shows its IP location as being in the US.
Furthermore, you can go through this Google discussion thread, which has more comments on this matter.