Every time I destroy my compute instance and create a new one, the same ephemeral external IP address is assigned. I haven't used a static IP, yet the same set of IP addresses keeps being assigned. How do I get a completely random external IP on my instances?
P.S. I use the default network, which is already there in the Cloud Console.
If an instance is stopped, any ephemeral external IP addresses assigned to the instance are released back into the general Compute Engine pool and become available for use by other projects. When a stopped instance is started again, a new ephemeral external IP address is assigned to the instance. There is no guarantee that you will get a specific IP every time, nor that you will always get a completely different IP address: you may get the same IP, or you may get a completely new IP from the pool. You can find details at this link.
If your use case is to assign multiple external IP addresses to a single instance, you can set up multiple forwarding rules to point to a single target instance using protocol forwarding.
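As a rough sketch of that protocol-forwarding setup (the instance, address, rule names, region, and port below are placeholders, not taken from the question), the gcloud steps might look like:

    # Wrap the VM in a target instance so forwarding rules can point at it.
    gcloud compute target-instances create my-target \
        --instance=my-vm --zone=us-central1-a

    # Reserve an additional external IP.
    gcloud compute addresses create extra-ip --region=us-central1

    # Forward traffic arriving on the extra IP (here TCP port 80) to the VM.
    gcloud compute forwarding-rules create extra-ip-rule \
        --region=us-central1 --address=extra-ip \
        --ip-protocol=TCP --ports=80 \
        --target-instance=my-target --target-instance-zone=us-central1-a

Each additional forwarding rule with its own reserved address gives the instance one more reachable external IP.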
Ephemeral external IP addresses: these addresses are available to VM instances and forwarding rules. Ephemeral external IP addresses remain attached to a VM instance only until the VM is stopped and restarted or the instance is terminated. If an instance is stopped, any ephemeral external IP addresses that were assigned to the instance are released back into the general Compute Engine pool and become available for use by other projects. When a stopped instance is started again, a new ephemeral external IP address is assigned to the instance.
I had thought this was instantaneous and that we wouldn't get back the same ephemeral IP address once the VM is stopped. However, it seems that ephemeral external IP addresses remain assigned to a project for a while longer. For instance, I deleted a VM and released an IP address two days ago; the same IP address was assigned to a new VM today.
How long does it take to release the ephemeral IP addresses back to the GCE pool?
Are the ephemeral external IP addresses "assigned" to VPC or Project? If one has multiple projects within a VPC, will the ephemeral IP addresses be rotated within the projects until they are released to the GCE pool?
The answer is "it should not matter if ephemeral addresses are reused for your instance or not".
There is no guarantee that your instance will obtain the same address or will not obtain the same address. If you are designing something that depends on a certain ephemeral IP address behavior, your design will fail at some point.
Your question quotes Google's official ephemeral IP address policy. Design to that statement and do not depend on environment-level behavior.
If you require a fixed IP address, then assign your instance a static IP address. Otherwise, your instance will have whatever address Google Cloud decides, which may or may not be the same address between restarts or recreates.
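If you go the static route, a minimal sketch (the resource names, region, and zone here are placeholders) could be:

    # Reserve a regional static external IP.
    gcloud compute addresses create my-static-ip --region=us-central1

    # Attach it when creating the instance.
    gcloud compute instances create my-vm \
        --zone=us-central1-a --address=my-static-ip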
If your goal is to randomize public IP addresses, you cannot count on the ephemeral address behavior to implement that. You can allocate a set of static IP addresses and then change which one is assigned to an instance (see the sketch below). Note that unused static IP addresses are billed (not free). Another method is to create instances in different regions and zones, which will have different public IP addresses. You could also write a script that creates VMs until the address is different (not part of a previous set of addresses) and then deletes the other VMs (subject to quota restrictions).
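For the rotation approach, a hedged sketch (the instance name, zone, and sample address are placeholders; this assumes the default access config name):

    # Detach the current external IP from the VM's NIC.
    gcloud compute instances delete-access-config my-vm \
        --zone=us-central1-a --access-config-name="External NAT"

    # Re-attach using a different reserved address from your pool.
    gcloud compute instances add-access-config my-vm \
        --zone=us-central1-a --access-config-name="External NAT" \
        --address=203.0.113.7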
From a customer support perspective, it is to Google's benefit to reallocate the same address to you. This minimizes a common problem: some customers do not understand that the default IP address is ephemeral and what that means. They reboot their instance and the IP address changes, which breaks their SSH scripts, DNS settings, etc. This reuse behavior cannot be guaranteed, but it is a positive where possible.
Answers to your questions -
It’s instantaneous.
Ephemeral addresses don't belong to any project and can be assigned to any resource across projects, randomly.
Some users have done tests like this and concluded that the system tends to assign familiar IP addresses to VMs when it can. However, this is not something confirmed by GCP. Regardless, it's of no use, as there is no guarantee which IP address you'll get.
In Google Cloud Platform, can we use multiple external public IP addresses and map them to an instance's alias IP addresses, both being part of NIC0 of a VM instance behind a GCP external network load balancer? (This way we could publish multiple services, each with a different public-to-private IP mapping, but the Google documentation suggests that this is not the case.)
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
Yes, it's possible to use multiple external IPs pointing to a single NIC0 interface, as per:
To assign multiple external IP addresses to a single instance, you can set up multiple forwarding rules to point to a single target instance using protocol forwarding.
Meaning that you would only need to create a pool and a regional forwarding rule.
Keep in mind that external traffic is going to be NATed at some point, so it will look like all requests are hitting your internal IP address directly; for example, external IPs 2.2.2.2, 3.3.3.3, and 4.4.4.4 will all be translated into something like 10.128.0.2.
This will only work for the main NIC0 IP.
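One reading of "a pool and a regional forwarding rule", sketched with placeholder names, region, and port:

    # Put the VM into a target pool.
    gcloud compute target-pools create my-pool --region=us-central1
    gcloud compute target-pools add-instances my-pool \
        --instances=my-vm --instances-zone=us-central1-a

    # One regional forwarding rule per external IP you want to publish.
    gcloud compute forwarding-rules create service-a-rule \
        --region=us-central1 --address=ip-a \
        --ports=443 --target-pool=my-pool

Repeat the forwarding rule for each reserved address; on the VM, all of that traffic will appear on the NIC0 internal IP, as noted above.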
When I create a new machine in GCP, Compute Engine assigns it an IP ending in 1xx.
I only have around 30 machines, but when Kubernetes generates a new node, the IP is incremented.
How can I reset this IP pool?
For example, right now I'm trying to deploy a Marketplace solution (RabbitMQ) and it isn't possible to reserve an internal IP.
TL;DR: GCE instances support two types of internal IP addresses: static internal and ephemeral. In both cases, IPs remain attached to a resource until they are explicitly detached from it. So if you can't reserve more IP addresses, it's because you have run out of IP addresses.
Google has a document named IP addresses that explains how IP addresses are assigned to resources within GCP.
Based on your question, it seems that you have a custom VPC. The error you are getting is because you ran out of IP addresses.
Ephemeral IP addresses are released when the resource is deleted, but static internal addresses are not automatically deleted with the resource; they only get detached. Most likely you have a lot of reserved addresses that are not attached to any resource.
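To check whether that is the case, something like this should show the orphans (the address name and region in the delete command are placeholders):

    # Addresses with status RESERVED are allocated but not attached to anything.
    gcloud compute addresses list --filter="status=RESERVED"

    # Release one you no longer need.
    gcloud compute addresses delete my-unused-address --region=us-central1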
Lastly, Stack Overflow may not be the best forum for this question. I highly recommend that you post it on Server Fault, since that is a better fit for infrastructure questions.
I want to use a Google Cloud instance as a VPN server with multiple external IP addresses.
What is the maximum number of external IPs I can use for one Google Cloud instance? (The documentation mentions that "The maximum number of network interfaces per instance is 8", but I'm not sure whether that means a limit of 8 IPs per instance or 8 subnets, each with a lot of IPs.)
Also, this is probably the dumbest question (I'm totally new to the cloud computing area), but if, for example, one external IP of the instance is 1.1.1.1, does that mean I can connect to the instance from the internet via this IP, and also that if some software running on the instance connects to another server, its log will show that the connection came from 1.1.1.1?
A Compute Engine instance can have multiple network interfaces, and each network interface can have both an internal and an external IP address. This means that, with a limit of 8 network interfaces, you can have at most 8 external IP addresses this way.
(Source: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address)
It is my understanding that if you have an internal IP address associated with a network interface of a Compute Engine instance (say 1.1.1.1) and then associate that interface with an external IP address, any traffic arriving at the instance through the external IP address will appear (to the Compute Engine instance) as though it had been sent to the internal IP address, and will be logged that way.
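As a hedged illustration of the multi-NIC setup (the network and subnet names are placeholders; note that each extra NIC must live in a different VPC network):

    # A VM with two NICs; each gets its own ephemeral external IP by default.
    gcloud compute instances create multi-nic-vm --zone=us-central1-a \
        --network-interface=network=vpc-a,subnet=subnet-a \
        --network-interface=network=vpc-b,subnet=subnet-b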
For our project we need a static IP bound to our Google Cloud VM instance due to IP whitelisting.
Since it's a managed, preemptible instance group, the VM will be terminated once in a while.
However, when it terminates I see in the operations log compute.instances.preempted directly followed by compute.instances.repair.recreateInstance with the note:
Instance Group Manager 'xxx' initiated recreateInstance on instance 'xxx'. Reason: instance's intent is RUNNING but instance's status is STOPPING.
After that, a delete and an insert operation follow in order to restore the instance.
The documentation states:
You can simulate an instance preemption by stopping the instance.
In which case the IP address will stay attached when the VM is started again.
A) So my question: is it possible to have the instance group manager stop and start the VM in the event of preemption, instead of recreating it? Recreating means that the static IP is detached and needs to be manually attached each time.
B) If option A is not possible, how can I attach the static IP address automatically so that I don't have to attach it manually when the VM is recreated? I'd rather not have an extra NAT VM instance to take care of this problem.
Thanks in advance!
I figured out a workaround to this (specifically, keeping a static IP address assigned to a preemptible VM instance between recreations), with the caveat that your managed instance group has the following properties:
- Not autoscaling.
- Max group size of 1 (i.e. there is only ever meant to be one VM in this group).
- Autohealing is default (i.e. it only recreates VMs after they are terminated).
The steps you need to follow are:
1. Reserve a static IP.
2. Create an instance template, configured as preemptible.
3. Create your managed group, assigning your template to the group.
4. Wait for the group to spin up your VM.
5. After the VM has spun up, assign the static IP that you reserved in step 1 to the VM.
6. Create a new instance template derived from the VM instance via gcloud (see https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#gcloud_1).
7. View the newly created instance template in the Console, and note that you see your external IP assigned to the template.
8. Update the MIG (Managed Instance Group) to use the new template, created in step 6.
9. Perform a proactive rolling update on the MIG using the Replace method.
10. Confirm that your VM was recreated with the same name, the disks were preserved (or not, depending on how you configured the disks in your original template), and the VM has maintained its IP address.
Regarding step 6, my gcloud command looked like this:
gcloud compute instance-templates create vm-template-with-static-ip \
--source-instance=source-vm-id \
--source-instance-zone=us-east4-c
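For reference, the other steps have gcloud equivalents along these lines (the names, region, zone, and sample address are placeholders; the access config name assumes the default):

    # Step 1: reserve a static external IP in the MIG's region.
    gcloud compute addresses create my-static-ip --region=us-east4

    # Step 5: swap the VM's ephemeral IP for the reserved one.
    gcloud compute instances delete-access-config my-vm \
        --zone=us-east4-c --access-config-name="External NAT"
    gcloud compute instances add-access-config my-vm \
        --zone=us-east4-c --access-config-name="External NAT" \
        --address=203.0.113.10

    # Steps 8 and 9: point the MIG at the new template and roll it out.
    gcloud compute instance-groups managed set-instance-template my-mig \
        --template=vm-template-with-static-ip --zone=us-east4-c
    gcloud compute instance-groups managed rolling-action replace my-mig \
        --zone=us-east4-c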
Almost goes without saying, this sort of setup is only useful if you want to:
Minimize your costs by using a single preemptible VM.
Not have to deal with the hassle of turning on a VM again after it's been preempted, ensuring as much uptime as possible.
If you don't mind turning the VM back on manually (and possibly not being aware it's been shut down for who knows how long) after it has been preempted, then do yourself a favor: skip the MIG and just stand up the single VM.
Answering your questions:
(A) It is not possible at the moment, and I am not sure if it will ever be possible. By design preemptible VMs are deleted to make space for normal VMs (if there are capacity constraints in the given zone) or regularly to differentiate them from normal VMs. In the latter case preemption might seem like a start/stop event, but in the former it may take a substantial amount of time before the VM is recreated.
(B) At the moment there is no good way to achieve this in general.
If you have a special case where your group has only one instance, you can hardcode the IP address in the instance template.
Otherwise, at the moment the only solution I can think of (other than using a load balancer) is to write a startup script that attaches the NAT IP, as sketched below.
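A minimal sketch of such a startup script (the static IP is a placeholder; this assumes the VM's service account has compute permissions and that gcloud is available on the image, as it is on the standard GCP images):

    #!/bin/bash
    # Hypothetical: re-attach a reserved static IP to this VM on boot.
    STATIC_IP="203.0.113.10"

    # The metadata server tells the VM its own name and zone.
    NAME=$(curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/name")
    ZONE=$(curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/zone" | awk -F/ '{print $NF}')

    # Replace the ephemeral access config with the static address.
    gcloud compute instances delete-access-config "$NAME" \
        --zone "$ZONE" --access-config-name "External NAT"
    gcloud compute instances add-access-config "$NAME" \
        --zone "$ZONE" --access-config-name "External NAT" --address "$STATIC_IP"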
I've found one way to ensure that all VMs in your network have the same outgoing IP address: using Cloud NAT you can assign a static IP which all VMs will use. There is a downside, though:
GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:

You configure an external IP on a VM's interface. If you configure an external IP on a VM's interface, IP packets with the VM's internal IP as the source IP will use the VM's external IP to reach the Internet. NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet. With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.

Note that making a VM accessible via a load balancer external IP does not prevent a VM from using NAT, as long as the VM network interface itself does not have an external IP address.
Removing the VM's external IP also prevents direct SSH access to the VM, even SSH access from the gcloud console itself. The quote above shows an alternative with a load balancer; another way is a bastion host, but neither directly solves access from, for example, Kubernetes/kubectl.
If that's no problem for you, this is the way to go.
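For completeness, a Cloud NAT setup along those lines might look like this (the names and region are placeholders):

    # Reserve the static IP that all NAT egress will use.
    gcloud compute addresses create nat-ip --region=us-central1

    # Cloud NAT hangs off a Cloud Router.
    gcloud compute routers create nat-router \
        --network=default --region=us-central1

    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --nat-all-subnet-ip-ranges --nat-external-ip-pool=nat-ip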
One solution is to let the instances have dynamically chosen ephemeral IPs, but set the group as the target of a load balancer with a static IP. This way, even when instances are created or destroyed, the LB acts as a frontend, keeping the IP constant over time.
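A hedged sketch of that setup with a basic network load balancer (all names, the region, zone, and port are placeholders):

    # Reserve a static regional IP for the load balancer frontend.
    gcloud compute addresses create lb-ip --region=us-central1

    # Target pool fronting the managed instance group.
    gcloud compute target-pools create lb-pool --region=us-central1
    gcloud compute instance-groups managed set-target-pools my-mig \
        --target-pools=lb-pool --zone=us-central1-a

    # Forwarding rule binds the static IP to the pool.
    gcloud compute forwarding-rules create lb-rule \
        --region=us-central1 --address=lb-ip \
        --ports=80 --target-pool=lb-pool

Instances can then come and go with ephemeral IPs while clients only ever see the LB's static address.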