I have a cluster of VMs (VMware vCloud) with a Consul server installed on one of them.
Communication between the VMs is done ONLY via internal network IPs; using the external IP is blocked. So the Consul agents installed on the other VMs get the internal IP as their advertised address.
I created a few microservices using k8s, which is installed on VMs outside the cluster. I can communicate with the cluster ONLY via the external IP.
Problem:
Consul returns the advertised address of the VM, and it can only be either the internal or the external IP. If I choose the internal IP, I cannot use it from outside the cluster; if I use the external IP, the agents installed inside the cluster will not be able to communicate. I did not find a way of configuring the advertised address with an FQDN.
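For reference, the agent startup I am working with looks roughly like this (the IPs are placeholders; 10.0.0.10 is the Consul server, 10.0.0.11 the agent's internal address):

# Placeholder IPs. The advertise address can be either the internal IP
# (reachable only inside the cluster) or the external IP (blocked between
# the VMs); as far as I can tell, -advertise only accepts an IP, not an FQDN.
consul agent -data-dir=/opt/consul \
  -retry-join=10.0.0.10 \
  -advertise=10.0.0.11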
Has anyone faced this issue or found a solution for it?
Thank you,
Lior
I have set up a Windows machine in Azure in a newly created VNet. After that I set up a Virtual Network Gateway on this VNet. The gateway is in a different subnet, as instructed by Microsoft. I am able to connect to this VPN from my desktop; however, I am unable to connect to the VM.
The VM's private IP is 10.0.0.4. It sits on the "default" subnet with address range 10.0.0.0/24. The VNet address range is defined as 10.0.0.0/24, and the gateway subnet address range is 10.67.0.0/24.
What have I done wrong? Is there any chance to alter the setup, or does it require building the VNet from scratch and then the VMs?
When you connect to your Azure VM from your desktop via the VPN connection, you can reach the VM on its private IP. If you have set the GatewaySubnet address range to 10.67.0.0/24, I would expect your VNet address space to already include a range like that; otherwise you need to expand your VNet address space to cover it.
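For example (the resource group and VNet names are placeholders), the VNet address space can be expanded with the Azure CLI so that it covers both the default subnet and the GatewaySubnet:

# Placeholder names: add 10.67.0.0/24 to the VNet's address space alongside
# the existing 10.0.0.0/24.
az network vnet update \
  --resource-group MyResourceGroup \
  --name MyVNet \
  --address-prefixes 10.0.0.0/24 10.67.0.0/24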
To configure your VPN, you could refer to the example values or this step-by-step blog. For Windows VPN clients, you could select SSTP (SSL) or IKEv2 and SSTP (SSL) as the tunnel type.
After you modify your configuration in the Azure portal, re-download the VPN client package and re-connect the VPN. Let me know if you need further assistance in this case.
I am building software that will consume a third-party API which limits me to 1200 requests per minute from the same IP address.
Since this limit is very low for me, I've been wondering if there's a way to have a set of IP addresses (let's say 100 addresses) and manage the queue so that once an IP reaches the limit, the next request is made from a new one.
Google Cloud Platform (GCP)
You will need two or more VPC networks. If you only have the default one, create an additional one. Each network adapter will connect to one VPC network.
Create a VM instance with two or more vCPUs. While creating the instance, attach a second network adapter connected to the second VPC network.
In your software, instead of binding a socket to 0.0.0.0, bind a socket to each network adapter's private IP address. Ping-pong back and forth between the adapters, spreading your traffic across your public IP addresses.
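For example (the IPs and URL are placeholders for the two NICs' private addresses and the third-party API), you can verify that traffic leaves through a specific NIC by binding the source address, which is the same idea as binding sockets in your own code:

# Placeholder IPs/URL: send one request out of each NIC by binding the
# connection to that NIC's private address.
curl --interface 10.128.0.2 https://api.example.com/v1/status
curl --interface 10.132.0.2 https://api.example.com/v1/status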
You can do the same type of configuration on AWS.
For our project we need a static IP bound to our Google Cloud VM instance due to IP whitelisting.
Since it's a preemptible VM in a managed instance group, the VM will terminate once in a while.
However, when it terminates, I see compute.instances.preempted in the operations log, directly followed by compute.instances.repair.recreateInstance with the note:
Instance Group Manager 'xxx' initiated recreateInstance on instance 'xxx'.
Reason: instance's intent is RUNNING but instance's status is STOPPING.
After that, a delete and an insert operation follow in order to restore the instance.
The documentation states:
You can simulate an instance preemption by stopping the instance.
In that case the IP address stays attached when the VM is started again.
A) So my question: is it possible to have the instance group manager stop and start the VM in the event of preemption, instead of recreating it? Recreating means that the static IP is detached and needs to be manually attached each time.
B) If option A is not possible, how can I attach the static IP address automatically, so that I don't have to attach it manually when the VM is recreated? I'd rather not have an extra NAT VM instance to take care of this problem.
Thanks in advance!
I figured out a workaround for this (specifically, keeping a static IP address assigned to a preemptible VM instance between recreations), with the caveat that your managed instance group has the following properties:
Not autoscaling.
Max group size of 1 (i.e. there is only ever meant to be one VM in this group)
Autohealing is default (i.e. only recreates VMs after they are terminated).
The steps you need to follow are:
Reserve a static IP (a gcloud sketch for this and steps 5, 8, and 9 follows the step-6 command below).
Create an instance template, configured as preemptible.
Create your managed group, assigning your template to the group.
Wait for the group to spin up your VM.
After the VM has spun up, assign the static IP that you reserved in step 1 to the VM.
Create a new instance template derived from the VM instance via gcloud (see https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#gcloud_1).
View the newly created instance template in the Console, and note that you see your External IP assigned to the template.
Update the MiG (Managed Instance Group) to use the new template, created in step 6.
Perform a proactive rolling update on the MiG using the Replace method.
Confirm that your VM was recreated with the same name, the disks were preserved (or not, depending on how you configured the disks in your original template), and the VM has maintained its IP address.
Regarding step 6, my gcloud command looked like this:
gcloud compute instance-templates create vm-template-with-static-ip \
--source-instance=source-vm-id \
--source-instance-zone=us-east4-c
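For completeness, hedged gcloud equivalents for steps 1, 5, 8, and 9 might look roughly like this (resource names, region, and zone are hypothetical; the access-config name may differ if the external IP was originally configured through the Console):

# Step 1: reserve the static IP (hypothetical name/region).
gcloud compute addresses create my-static-ip --region=us-east4
# Step 5: swap the VM's ephemeral external IP for the reserved one.
gcloud compute instances delete-access-config my-vm --zone=us-east4-c \
  --access-config-name="external-nat"
gcloud compute instances add-access-config my-vm --zone=us-east4-c \
  --access-config-name="external-nat" \
  --address=$(gcloud compute addresses describe my-static-ip \
    --region=us-east4 --format='get(address)')
# Steps 8 and 9: point the MiG at the new template, then replace in place
# (no surge instance, since the group only ever holds one VM).
gcloud compute instance-groups managed set-instance-template my-mig \
  --template=vm-template-with-static-ip --zone=us-east4-c
gcloud compute instance-groups managed rolling-action replace my-mig \
  --zone=us-east4-c --max-surge=0 --max-unavailable=1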
It almost goes without saying that this sort of setup is only useful if you want to:
Minimize your costs by using a single preemptible VM.
Not have to deal with the hassle of turning on a VM again after it's been preempted, ensuring as much uptime as possible.
If you don't mind turning the VM back on manually (and possibly not being aware it's been shut down for who knows how long) after it has been preempted, then do yourself a favor, don't bother with the MiG, and just stand up the single VM.
Answering your questions:
(A) It is not possible at the moment, and I am not sure if it will ever be possible. By design preemptible VMs are deleted to make space for normal VMs (if there are capacity constraints in the given zone) or regularly to differentiate them from normal VMs. In the latter case preemption might seem like a start/stop event, but in the former it may take a substantial amount of time before the VM is recreated.
(B) At the moment there is no good way to achieve this in general.
If you have a special case where your group has only one instance, you can hardcode the IP address in the Instance Template.
Otherwise, at the moment the only solution I can think of (other than using a Load Balancer) is to write a startup script that would attach the NAT IP.
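A rough sketch of such a startup script (the reserved address and access-config name are placeholders; it assumes the image ships the gcloud CLI and the instance's service account is allowed to modify access configs):

#!/bin/bash
# Hypothetical sketch: re-attach a reserved static IP to this instance at boot.
# "external-nat" is the default access-config name gcloud uses and may differ
# in your setup.
STATIC_IP="203.0.113.10"
NAME=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
gcloud compute instances delete-access-config "$NAME" --zone "$ZONE" \
  --access-config-name "external-nat"
gcloud compute instances add-access-config "$NAME" --zone "$ZONE" \
  --access-config-name "external-nat" --address "$STATIC_IP"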
I've found one way to ensure that all VMs in your network have the same outgoing IP address. Using Cloud NAT you can assign a static IP which all VMs will use. There is a downside, though:
GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:
You configure an external IP on a VM's interface.
If you configure an external IP on a VM's interface, IP packets with the VM's internal IP as the source IP will use the VM's external IP to reach the Internet. NAT will not be performed on such packets. However, alias IP ranges assigned to the interface can still use NAT because they cannot use the external IP to reach the Internet. With this configuration, you can connect directly to a GKE VM via SSH, and yet have the GKE pods/containers use Cloud NAT to reach the Internet.
Note that making a VM accessible via a load balancer external IP does not prevent a VM from using NAT, as long as the VM network interface itself does not have an external IP address.
Removing the VM's external IP also prevents direct SSH access to the VM, even SSH access from the Cloud Console itself. The quote above shows an alternative with a load balancer; another way is a bastion host, but that doesn't directly solve access from, for example, Kubernetes/kubectl.
If that's no problem for you, this is the way to go.
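A minimal Cloud NAT setup with a reserved address might look roughly like this (the network, router, NAT, and address names and the region are hypothetical):

# Hypothetical names/region: reserve a static IP and route all subnets of
# "my-network" through a Cloud NAT gateway that uses it for egress.
gcloud compute addresses create nat-ip --region=us-east4
gcloud compute routers create nat-router --network=my-network --region=us-east4
gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-east4 \
  --nat-external-ip-pool=nat-ip \
  --nat-all-subnet-ip-ranges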
One solution is to let the instances have dynamically chosen ephemeral IPs, but set the group as the target of a load balancer with a static IP. This way, even when instances are created or destroyed, the LB acts as a frontend, keeping the IP continuous over time.
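A minimal sketch of that setup (resource names, region, zone, and port are hypothetical) using a network load balancer in front of the MIG:

# Hypothetical names/region/zone/port: reserved IP + target pool + forwarding rule.
gcloud compute addresses create lb-ip --region=us-east4
gcloud compute target-pools create mig-pool --region=us-east4
gcloud compute instance-groups managed set-target-pools my-mig \
  --target-pools=mig-pool --zone=us-east4-c
gcloud compute forwarding-rules create mig-rule --region=us-east4 \
  --address=lb-ip --ports=80 --target-pool=mig-pool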
I have one CentOS instance in AWS and another instance in Hybris Cloud.
The AWS instance is running a Jenkins server, and I want to install a slave for it on the Hybris Cloud instance.
I have followed the steps to establish an SSH connection between the two machines but still can't get them to connect.
What am I missing? Is there any special SSH configuration for establishing a connection between different cloud providers?
I can't speak for Hybris, but AWS has a security group for your EC2 instance. The security group for your AWS instance must allow port 22 from the IP address of your Hybris server (or a range of IP addresses). In addition, the host firewall on the EC2 Jenkins server must allow this as well.
Likewise, the Hybris server must have the same ports opened up.
If you continue having issues after checking security groups and host firewalls, check the Network ACL in AWS. If you are in your default VPC and there have been no alterations, the Network ACL should allow your use case. However, if you are in a non-default VPC, whoever created it may have adjusted the Network ACL.
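For example, an inbound rule like this (the security group ID and source IP are placeholders) opens port 22 only to the Hybris server's address:

# Placeholder security group ID and source IP: allow SSH from the Hybris
# server only.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.25/32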