HashiCorp Consul FQDN - VMware

I have a cluster of VMs (VMware vCloud) with a Consul server installed on one of them.
Communication between the VMs is done ONLY via the internal network IPs; using the external IP is blocked. So the Consul agents installed on the other VMs get the internal IP as the advertised address.
I created a few microservices using k8s, which is installed on VMs outside the cluster. I can communicate with the cluster ONLY via the external IP.
Problem:
Consul returns an advertised address for the VM, and it can only be either the internal or the external IP. If I choose the internal IP, I cannot use it from outside the cluster; if I use the external IP, the agents installed within the cluster will not be able to communicate. I did not find a way of configuring the advertised address with an FQDN.
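For context, this is roughly how the advertised address surfaces to a client outside the cluster. The sketch below queries Consul's standard HTTP catalog endpoint; the server IP and service name are placeholders:

```python
# Minimal sketch: querying the Consul catalog from outside the cluster.
# The "Address" field in each entry is the agent's advertised address, which is
# why an internal-only IP there is unusable from the k8s side.
# (server IP and service name below are placeholders)
import requests

CONSUL_HTTP = "http://203.0.113.10:8500"   # external IP of the Consul server
SERVICE = "my-service"

resp = requests.get(f"{CONSUL_HTTP}/v1/catalog/service/{SERVICE}", timeout=5)
resp.raise_for_status()
for entry in resp.json():
    # ServiceAddress falls back to the node's advertised Address when it is empty
    print(entry["Node"], entry["Address"], entry.get("ServiceAddress") or entry["Address"])
```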
Has anyone faced this issue or found a solution for it?
Thank you,
Lior

Related

Issues accessing on-premise network via VPN from GKE pods

I've created:
- 1 GKE Autopilot cluster
- 1 Classic VPN tunnel to the on-premise network
Network Connectivity Tests suggest that my VPN is working (I suppose).
[screenshot: result of the Connectivity Test]
When trying to traceroute from a pod however:
[screenshot: traceroute output from the pod]
Pod IP is 10.4.0.85
I have a host project (the network project) which also contains the VPN tunnel and routing. This network is shared with another project, which contains the GKE Autopilot cluster.
The VPN tunnel shows as working from both ends. The GKE nodes are pingable from the on-premise network.
Since it is an Autopilot cluster, I cannot confirm whether the connection from the nodes to on-premise is working.
I expected my traceroute to show a successful connection to the on-premise IP, or at least to the VPN endpoint of the on-premise network.
It only shows one hop, to 10.4.0.65 (this is in the CIDR of my GKE cluster, but I do not know what it belongs to).
I've taken a look at IP masquerading as described here, without success.
And now I am lost. I suppose my packets (from traceroute) are not even leaving the GKE cluster, but why that is, I cannot tell.
I'd be grateful for a hint in the right direction.
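If it helps the diagnosis, a quick TCP probe like the sketch below (run from a throwaway pod with a Python image) can show whether plain TCP traffic reaches the on-premise side at all, independently of traceroute's UDP/ICMP probes; the target IP and port are placeholders:

```python
# Minimal TCP reachability probe to run from inside a pod.
# (the on-premise IP and port below are placeholders)
import socket

TARGET = ("192.168.10.20", 443)   # some on-premise host and a port it listens on

try:
    with socket.create_connection(TARGET, timeout=5):
        print("TCP connect OK - packets are leaving the cluster and coming back")
except OSError as exc:
    print(f"TCP connect failed: {exc}")
```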

GCP: How to access an L2 VM (QEMU) running in a pod in GCP by IP from the internet?

I have a cluster of 2 nodes created in GCP. The worker node (L1 VM) has nested virtualization enabled. I have created a pod on this L1 VM, and I have launched an L2 VM using QEMU inside this pod.
My objective is to access this L2 VM by an IP address from the external world (the internet). There are many services running in my L2 VM and I need to access it only by IP.
I created a tunnel from the node to the L2 VM (which is within the pod) to get a DHCP address for my VM, but it seems the DHCP offer and ACK messages are blocked by Google Cloud.
I have a public IP in the cluster through which I can reach the private IP of the node; most probably there is a NAT configured in the cloud for the node's private IP.
Can I configure the node as a NAT gateway so that I can push these packets further, from the internet to the L2 VM?
Any other suggestions are welcome!
I think you are trying to implement something like a bastion host. However, this is something you shouldn't do with Kubernetes; although you 'can' implement it with Kubernetes, it is simply not made for it.
I see two viable options for you:
A. Create another virtual machine (GCE instance) inside the same VPC as the cluster and set it up as a bastion host or as an endpoint for a VPN.
B. You can use Identity-Aware Proxy (IAP) to tunnel the traffic to your machine inside the VPC, as described here.
IAP is probably the best solution for your use case.
Also consider using plain GCE instances as opposed to a Kubernetes cluster. A Kubernetes cluster is very useful if you have more workload than a single node can handle, or if you need to scale out and in, etc. Your use case looks to me more like you are still thinking in the traditional server world, and less in terms of cattle vs. pets.
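To illustrate option A: a bastion-style setup boils down to relaying a port that is reachable from the internet on the GCE instance to the L2 VM's internal address. The addresses and ports in this sketch are placeholders, and in practice you would use SSH port forwarding or an IAP tunnel rather than hand-rolled code:

```python
# Illustrative TCP relay: listen on the bastion and forward to an internal address.
# (listen port, target IP and target port below are placeholders)
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)       # port exposed on the bastion GCE instance
TARGET_ADDR = ("10.128.0.42", 22)     # hypothetical internal IP/port of the L2 VM

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close both ends."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def main() -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen()
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(TARGET_ADDR)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    main()
```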

Cannot connect to other VM on the same network from Kubernetes pod

I am currently running a Kubernetes cluster on GCP. The cluster has several pods, and I created a new VM in the same network. From a Kubernetes pod I can ping the VM but cannot connect via the VM's internal IP. Please help me find a solution for this issue. Thanks.
I found a solution for this issue: create a firewall rule on GCP for the VM that allows traffic with the pod IP range 10.0.0.0/8 as the source.
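For reference, the same kind of rule can be created with the google-cloud-compute Python client; the rule name, network, and port range below are assumptions to adapt to your setup:

```python
# Sketch: allow ingress from the GKE pod range to VMs in the network.
# (project ID, rule name, network and ports below are placeholders)
from google.cloud import compute_v1

def allow_pod_range(project_id: str) -> None:
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"              # field name as generated by the client library
    allowed.ports = ["0-65535"]               # or restrict to the service port you need

    firewall = compute_v1.Firewall()
    firewall.name = "allow-gke-pods-to-vm"
    firewall.network = f"projects/{project_id}/global/networks/default"
    firewall.direction = "INGRESS"
    firewall.source_ranges = ["10.0.0.0/8"]   # pod IP range from the answer above
    firewall.allowed = [allowed]

    client = compute_v1.FirewallsClient()
    operation = client.insert(project=project_id, firewall_resource=firewall)
    operation.result()                        # block until the operation finishes

allow_pod_range("my-project-id")
```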

Cannot connect to Redis instance in GCP

I created an instance on GCP, but I am not able to access it.
This is similar to this one, but the proposed solution isn't working for me:
Unable to telnet to GCP MemoryStore
I have tried to telnet to it; I am in the same project and region, but apparently I need to be in the same network, since it's a private IP. But what if you want to connect using Cloud Shell? Also, how would an application running on my local machine access it?
I also included a firewall rule to make sure incoming connections are allowed.
To connect a client to a Cloud Memorystore for Redis instance, the client and the instance must be located in the same region, in the same project, and in the same VPC network. Please check the “Networking” document, where you’ll find information on basic network settings, limited and unsupported networks, network peering, and IP address ranges.
You can connect to Redis from different GCP products, like a Compute Engine VM, a Google Kubernetes Engine cluster, or a Google Kubernetes Engine pod, but you can’t connect directly from Cloud Shell or from your local machine, since they are not in your VPC network.
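For example, a minimal check from a GKE pod using the redis-py package looks like the sketch below; the host is a placeholder for the instance's private IP shown in the console:

```python
# Minimal connectivity check to a Memorystore instance from inside the VPC
# (e.g. from a GKE pod); requires the redis-py package.
# (host below is a placeholder for the instance's private IP)
import redis

r = redis.Redis(host="10.0.0.3", port=6379, socket_timeout=5)
print(r.ping())   # True if the instance is reachable from here
```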
It may also have to do with a missing peering connection to your network. Check in your console at https://console.cloud.google.com/networking/peering/ to see if the peering is set up properly.
If you are using Terraform, you can use the following docs: https://www.terraform.io/docs/providers/google/r/redis_instance.html

Unable to connect to Azure VM with internal IP

I have two VNets, VNET1 and VNET2, that are connected using a gateway. VNET2 has a VM which hosts a MongoDB instance. I have a WebJob running within an App Service Environment which is deployed into a subnet within VNET1. From this subnet I am able to access the VM in VNET2 by its DNS name, but I am unable to access the VM's internal IP. Any suggestions are welcome.
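One way to narrow this down from the WebJob side is to compare name-based and IP-based reachability of the MongoDB port; the hostname, IP, and port in this sketch are placeholders:

```python
# Compare reachability of the MongoDB port by DNS name vs. internal IP.
# (hostname, IP and port below are placeholders)
import socket

def reachable(host: str, port: int = 27017, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("by DNS name:   ", reachable("mongovm.internal.cloudapp.net"))
print("by internal IP:", reachable("10.1.0.4"))
```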
An internal IP address is internal to a VNet, and VNets are isolated from one another by design. See this site for a good overview: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-overview/. If you want to connect internally, you might want to consider having multiple subnets within the same VNet instead.
At present, connecting two VNets using a gateway allows IP communication but doesn't allow DNS name resolution. In this scenario we recommend managing a local DNS server. This page shows the requirements for using your own DNS server in Azure.
Hth, Gareth