Google Cloud Compute Instance, IPv6 - google-cloud-platform

I currently have a Google Cloud compute instance set up as the back end for a multiplayer game. Certain publishers and app stores that I'm trying to publish the game on require that the server can be reached by a client using an IPv6 address, which makes perfect sense. So the question is: how do I make the compute instance reachable via IPv6?
It's worth noting that the client and server communicate over UDP, so load balancing doesn't appear to be an option (from what I can tell, Google Cloud load balancers only support TCP).
Has anyone else had this issue, and if so how did you solve it?
Many thanks in advance.

IPv6 Termination for HTTP(S), SSL Proxy, and TCP Proxy Load Balancing is currently in Beta.
https://cloud.google.com/compute/docs/load-balancing/ipv6
Configuring IPv6 termination for your load balancers lets your backend instances appear as IPv6 applications to your IPv6 clients.
Note: The documentation says this feature is not covered by any SLA or deprecation policy and may be subject to backward-incompatible changes.
The definition of Beta from their documentation: Beta is the point at which we are ready to open a release for any customer to use. There are no SLA or technical support obligations in a Beta release, and charges may be waived in some cases. Products will be complete from a feature perspective, but may have some open outstanding issues. Beta releases are suitable for limited production use cases.
https://cloud.google.com/terms/launch-stages

IPv6 Termination for HTTP(S), SSL Proxy, and TCP Proxy Load Balancing became GA on September 20, 2017.
Source: https://cloudplatform.googleblog.com/2017/09/announcing-ipv6-global-load-balancing-ga.html
See the documentation at https://cloud.google.com/compute/docs/load-balancing/ipv6
Keep in mind that inside the GCP network everything is still IPv4: https://issuetracker.google.com/issues/35904387

Google Cloud now supports external IPv6 on VM instances. Each instance can get a /96 external IP range, which can be used to reach the internet (without NAT) or for VM-to-VM traffic.
At the moment (July 2021) it is only supported in a limited set of regions:
asia-east1
asia-south1
europe-west2
us-west2
For more detail, see https://cloud.google.com/compute/docs/ip-addresses/configure-ipv6-address and https://cloud.google.com/vpc/docs/vpc#ipv6-addresses
If your instance happens to be in one of the four regions above, you should be able to use the VM instance IPv6 feature.
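To verify IPv6 reachability end to end once the address is attached, here is a minimal sketch of a dual-stack UDP echo server you could run on the instance; the port number is an arbitrary example, not anything the game requires:

```python
import socket

PORT = 7777  # arbitrary example port; use whatever your game server binds

# One IPv6 socket, with IPV6_V6ONLY cleared so the same socket also
# accepts IPv4-mapped clients (dual-stack; supported on Linux).
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", PORT))  # "::" listens on all IPv6 addresses

print(f"Listening on UDP port {PORT} (IPv6 + mapped IPv4)")
while True:
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)  # echo back so the client can confirm the round trip
```

From an IPv6-capable client you can then confirm round trips, e.g. with something like `echo test | nc -6 -u <instance-ipv6> 7777`.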

May 2022 update.
Per https://cloud.google.com/vpc/docs/subnets#limitations
Internal and external IPv6 subnets are available in all regions except asia-southeast2 and asia-northeast3.

Related

Google Cloud Redis - IP Address changed without warning

TL;DR: I could use some advice on how to set up Redis for production use on GCP. It just switched IP addresses on us randomly, there is nothing in the documentation about that, and I have no idea how to build a stable solution given that possibility.
Background:
We've been using Google Cloud for a few years and have had a stable Redis Memorystore instance on the Standard tier.
In the past few days, our web servers started slowly crashing every so often. After investigating, we found that something was locking up when connecting to Celery/Redis: all our config files had 10.0.0.3 as the Redis instance, but the IP address of the server was listed as 10.0.0.4. This had never changed before, and our configs are in git, so we're sure they were unchanged.
Since Celery won't boot with a bad connection, we know the address was correct on Tuesday when we pushed new code. It seems the server failed over and somehow an IP address change was issued on us. As evidence:
Our usage graphs bizarrely change color at a specific point,
which matches our error logs: "[2020-06-16 03:09:21,873: ERROR/MainProcess] Error in timer: ReadOnlyError("You can't write against a read-only slave.",)"
All the documentation we have found says the IP address should stay the same, but given that it didn't, I'm hoping for some feedback on how one would work around a non-static IP in this case on GCP.
Memorystore does not support static IP addresses. Scenarios where an IP address change can occur include restarts or changes to the connection mode.
From a review of the Memorystore for Redis networking page: when using direct access via IP address, your project sets up a VPC network peering connection with Google's internal project, where the instance is managed. This creates an allocated IP range for Memorystore to use for instances; the range can either be provided by you or picked from the available space (a /29 block by default).
On the other hand, Memorystore for Redis exposes uptime as a metric through Cloud Monitoring (formerly Stackdriver). This can be used as a health check for the instance, as it lets you determine whether there has been a restart or a period of unavailability.
Following the point above, you can set up an alert on the uptime metric directly in Cloud Monitoring. Unfortunately, there is nothing specific to IP address changes.
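Given that the endpoint can move, one mitigation is to stop hardcoding the IP in checked-in config files and re-read it on connection failure. A minimal sketch with the redis-py client, assuming the current endpoint is injected through an environment variable (`REDIS_HOST` is an illustrative name that your deployment tooling would update):

```python
import os
import time
import redis  # pip install redis

def connect() -> redis.Redis:
    # Read the endpoint from the environment rather than a checked-in
    # config file, so an IP change only requires updating one variable.
    host = os.environ["REDIS_HOST"]  # illustrative variable name
    return redis.Redis(host=host, port=6379, socket_timeout=5)

r = connect()
while True:
    try:
        # Write probe: fails with ReadOnlyError against a read-only
        # replica, which matches the error seen in the question.
        r.set("healthcheck:probe", "ok", ex=60)
    except (redis.exceptions.ConnectionError, redis.exceptions.ReadOnlyError):
        # The instance may have failed over or moved; re-read the
        # endpoint and reconnect instead of crashing the worker.
        r = connect()
    time.sleep(30)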

Google Cloud SCTP

I am trying to test SCTP traffic from the internet to instances within GCP, but it is not working. Having checked the firewall documentation, is it safe to conclude that GCP does not allow SCTP traffic from the internet to instances?
If this is true, what is the rationale behind it? SCTP is a major protocol used in telecom.
Google blocks SCTP traffic going in and out of VM instances to the Internet. This is most probably done for security reasons, and perhaps for reliability and the ease of managing infrastructure as vast as Google's.
The image you posted (from the GCP firewall documentation) says that direct use of SCTP outside the GCP network is blocked. For the moment, you can go to the Issue Tracker and create a feature request for this functionality.
As a workaround, you can always try to tunnel it inside other protocols (as @John Hanley suggested) or use a VPN. Nothing else comes to mind.

How can I configure Google Cloud Platform with Cloudflare only?

I recently started using GCP, but there is one thing I can't solve.
I have 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only the Cloudflare and LB IPs, the website crashes and refuses connections).
Recently I was under attack. I activated Cloudflare's DDoS mode and restarted everything, but within about six hours the attack came back with Cloudflare still active. I saw MySQL connections jump from 20-30 to 254, all coming from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it.
If I activate firewall rules allowing traffic only from the LB and Cloudflare, the web server refuses all connections.
Any idea what i can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or what software caused the issue.
Generally speaking, you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that should help you:
Keep your firewall rules sensible; e.g. it is not good practice to have a rule allowing ingress on port 22 from all source IPs, for obvious reasons. (A sketch for pulling Cloudflare's published ranges into an allow list follows these hints.)
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people that aren't currently working for you or your company still have access to them.
If you're using certificates revoke them, generate new ones and share them in a secure way and with the minimum required number of users.
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
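For the firewall hint above: Cloudflare publishes the IP ranges its proxies connect from, so you can restrict ingress on ports 80/443 to those ranges plus your LB. A minimal sketch that fetches the published lists (the step that actually creates the firewall rules is left out; the URLs are Cloudflare's published lists as of this writing):

```python
import urllib.request

# Cloudflare publishes the ranges its edge proxies use. Allow only
# these (plus your LB's addresses) as ingress sources on ports 80/443.
CF_LISTS = [
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
]

ranges = []
for url in CF_LISTS:
    with urllib.request.urlopen(url) as resp:
        ranges.extend(resp.read().decode().split())

# Feed these CIDRs to your firewall tooling as the allowed source ranges.
for cidr in ranges:
    print(cidr)
```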
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor the project closely via the Monitoring API to identify abnormal usage.
To diagnose trouble with GCE instances, serial port output from the instance can be useful.
You can check the serial port output by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when an instance is shut down and rebooted, and the log is not visible while the instance is stopped.
Stackdriver Monitoring is also helpful, as it provides an audit trail for diagnosing problems.
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) that can be set up to trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
Here are some more hints on keeping GCP projects secure.

How to set up Tomcat session state in AWS EC2 for failover and security

I am setting up a Tomcat application in EC2. For reliability, I am running two or more instances. If one server goes down, my users should be redirected to the other instance. This suggests that session state should be kept in an external source, or mirrored between the servers.
AWS offers a hosted service, Elasticache, which seems like it would work well. I even found a nice library, memcached-session-manager. However, I soon ran into some issues.
Unless someone can convince me otherwise, I need the session states to be encrypted in transit. Otherwise someone could intercept the network traffic and pretend to be someone else on my site. I don't see any built-in Amazon method to keep traffic off the internet. (Is peering available here?)
The library mentioned earlier does have Redis support with SSL, but it does not support a Redis cluster. Someone submitted a pull request for this, but it has not been merged, and the library is a complex build. I may talk myself into living without the cluster, but that puts us back at a single point of failure.
Tomcat is running on EC2 in your VPC, and ElastiCache is in your VPC. Your AWS VPC is an isolated network. Nobody can intercept the traffic between the EC2 and Elasticache servers unless your VPC network becomes compromised in some way.
If you want to use Redis instead, with SSL connections, then I believe at this time you would need a Tomcat Session Manager implementation that uses Jedis. This one uses Jedis, but you would need to upgrade the version of Jedis it uses in order to use SSL connections.
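To illustrate the encrypted-in-transit requirement, here is a hedged sketch of a TLS connection to Redis, in Python with redis-py rather than Java/Jedis (the Jedis setup is analogous); the host, port, and certificate path are placeholders:

```python
import redis  # pip install redis

# Placeholders throughout: substitute your endpoint and CA bundle.
r = redis.Redis(
    host="redis.example.internal",
    port=6380,                   # a common port choice for TLS-enabled Redis
    ssl=True,                    # encrypt session payloads in transit
    ssl_cert_reqs="required",    # verify the server's certificate
    ssl_ca_certs="/etc/ssl/certs/redis-ca.pem",
)

r.set("session:abc123", b"serialized-session-state")
print(r.get("session:abc123"))
```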

Connect via VPN to third party from AWS

We have a number of third-party systems which are not part of our AWS account and not under our control. Each of these systems has an internal IIS server, set up with DNS that is only resolvable from the local computer. This IIS server hosts an API which we want to use from our EC2 instances.
My idea is to set up some type of VPN connection between the EC2 instance and the third-party system so that the EC2 instance can use the same internal DNS to call the API.
AWS provides Direct Connect; is that the correct path to go down here? If it is, can anyone provide any help on how to move forward? If it's not, what is the correct route?
Basically, we have a third-party system, and on it is an IIS server running software which exposes an API. From the local machine I can run http://<domain>/api/get and it returns a JSON response. However, to get onto the third-party system we attach via a VPN on an individual laptop. We need our EC2 instance in AWS to access this API, so it needs to connect to the third party via the same VPN connection. So I think I need a separate VPC within AWS.
The best answer depends on your budget, bandwidth and security requirements.
Direct Connect is excellent. This service provides a dedicated physical network connection from your point of presence to Amazon. Once Direct Connect is configured and running, you would then configure a VPN (IPsec) over this connection. Negatives: long lead times to install the fibre, and it is relatively expensive. Positives: high security and predictable network performance.
For your situation, you will probably want to consider a VPN over the public Internet instead. Depending on your requirements, I would recommend installing Windows Server on both ends, linked via a VPN. This gives you an easy-to-maintain system, provided you have Windows networking skills available.
Another good option is OpenSwan installed on two Linux systems; OpenSwan provides the VPN and routing between the networks.
Setup for either Windows or Linux (OpenSwan) is easy; you could configure everything in a day or two.
Both Windows and OpenSwan support a hub architecture: one system in your VPC and one in each of your data centers.
Depending on the routers installed in each data center, you may be able to use AWS Virtual Private Gateways. The routers in each data center are set up with connection information, and then you connect the Virtual Private Gateways to them. This is actually a very good setup if you have the right hardware in your data centers (i.e. a router that Amazon supports, which covers quite a few).
Note: you probably cannot use a VPN client, as a client does not route two networks together, just a single system to a network.
You will probably need to set up a DNS forwarder in your VPC to communicate back to your private DNS servers.
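Once the tunnel and forwarder are in place, a quick hedged check from the EC2 instance that the internal name resolves and the API answers; the hostname and path below are placeholders for the third party's internal values:

```python
import socket
import urllib.request

HOST = "internal-api.example.local"   # placeholder internal DNS name
URL = f"http://{HOST}/api/get"        # placeholder API path

# 1. Confirm the VPC's DNS forwarder resolves the internal name.
print(HOST, "->", socket.gethostbyname(HOST))

# 2. Confirm the API is reachable through the VPN tunnel.
with urllib.request.urlopen(URL, timeout=10) as resp:
    print(resp.status, resp.read()[:200])
```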
Maybe sshuttle can do what you need. Technically, you can open an SSH tunnel between your EC2 instance and a remote SSH host, and it can also resolve DNS requests on the remote side. It is not a perfect solution, since a typical VPN has failover, but you can use it as a starting point, and later perhaps as a fallback or for testing purposes.