How to announce a VIP with BGP for a DPDK application?

I'm new to DPDK and have a question about how to announce a VIP through BGP for my DPDK-based load balancer (LB).
The design looks like this:
I have multiple hosts (virtual machines or physical machines), and each host advertises the same VIP, e.g. 10.0.0.10.
Each LB host has two NICs: one for admin purposes (NIC1) and one bound to DPDK (NIC2).
The LB forwards packets to backend servers based on the VIP configuration.
Normally I use Bird or Quagga for BGP advertising, but I don't know how to make BGP work with a DPDK application. Do I need to implement BGP and ARP within the DPDK application? That seems like overkill to me.
I searched online, but there seems to be little information about this. I hope DPDK experts can provide some hints or point me to related resources.
Update:
My question is: how do I announce the VIP with Quagga/Bird for my DPDK application (NIC2)? The DPDK application and Quagga/Bird run on the same machine.
After more investigation, I found that one solution is to use DPDK KNI to forward BGP traffic to the Linux kernel, so Quagga/Bird can advertise the VIP through the KNI interface.
I'm wondering whether it is possible to announce the VIP through NIC1 on behalf of NIC2 (which is bound to the DPDK application), so I don't need to implement KNI in my DPDK application.
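For what it's worth, here is a minimal sketch of the KNI approach, assuming the stock DPDK kni sample application and Bird 1.x; the interface name vEth0, the addresses, and the ASNs are placeholders, not taken from any real setup:

# Run the DPDK kni sample app on the DPDK-bound port (NIC2); it creates a
# kernel-visible vEth0 interface backed by the DPDK port and already shuttles
# packets (including ARP and BGP) between the port and the kernel.
# Core/port arguments are placeholders for your environment.
./examples/kni/build/kni -l 0-3 -n 4 -- -P -p 0x1 --config="(0,1,2)"

# Assign the VIP to the KNI interface so Bird can pick it up
ip link set vEth0 up
ip addr add 10.0.0.10/32 dev vEth0

# Minimal Bird 1.x config: originate only the VIP to a hypothetical peer
cat > /etc/bird/bird.conf <<'EOF'
router id 192.0.2.1;                    # placeholder router ID
protocol device { }
protocol direct { interface "vEth0"; }  # picks up the /32 VIP
protocol bgp upstream {
    local as 65001;                     # placeholder private ASN
    neighbor 192.0.2.254 as 65000;      # placeholder ToR/peer
    export where net = 10.0.0.10/32;    # only announce the VIP
}
EOF
bird -c /etc/bird/bird.conf

With this layout the DPDK application itself does not implement BGP or ARP; the kernel and Bird handle both via the KNI interface.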

Related

ARP protocol in GCP for two VMs to communicate directly

I have two machines within GCP. Both machines are on the same subnet.
As I understand it, GCP is built on SDN, so there is no traditional switching. In other words, there is no ARP resolution that would let my two machines communicate directly with each other, bypassing the default gateway.
Am I right? Can you please shed some light on this?
I am not sure what you mean by:
In other words, there is no ARP resolution that would let my two machines communicate directly with each other, bypassing the default gateway.
ARP and RARP are supported. ARP lookups are handled in kernel software.
Once two systems have communicated, the operating systems know about each other and the IP-to-MAC mapping. If the systems have not communicated previously, a MAC lookup takes place, which is managed by the VPC network.
VPC networks use Linux's VIRTIO network module to model Ethernet card and router functionality, but higher levels of the networking stack, such as ARP lookups, are handled using standard networking software.
ARP lookup
The instance kernel issues ARP requests and the VPC network issues ARP replies. The mapping between MAC addresses and IP addresses is handled by the instance kernel.
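As a quick sanity check (a generic Linux example, not GCP-specific, with 10.128.0.3 standing in for the other VM's internal IP), you can watch the instance kernel populate its ARP cache after sending traffic to a neighbor:

ping -c 1 10.128.0.3        # triggers an ARP request if there is no cached entry
ip neigh show 10.128.0.3    # shows the IP-to-MAC mapping held by the kernel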
MAC lookup table, IP lookup table, active connection table
These tables are hosted on the underlying VPC network and cannot be inspected or configured.
Advanced VPC concepts

Can not communicate between subnets in the same virtual network

Not sure what exactly is happening, since it always worked before, but VMs on different subnets within the same virtual network, with no NSGs or firewalls between them, cannot talk to each other. Ping fails, as does every other sort of communication. Firewalls are disabled on both sides. All machines have Internet access. Communication was tried using IP addresses, not names. Both ping and TCP-based tests were used.
The effective routes for app01, for example, are below.
By default, Azure allows communication between subnets in the same VNet.
Your issue seems to be on the Azure side; I suggest you open a ticket in the Azure Portal.
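If you want to re-check the effective routes from the CLI rather than the portal, something like this should work, assuming a resource group myRG and a NIC named app01-nic (both placeholders):

az network nic show-effective-route-table --resource-group myRG --name app01-nic --output table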

New to VMware Networking

Need help with setting up a home lab. I have one VMware ESXi server running ESXi 5.5. It has 2 physical NICs. I have one network that has access to the Internet. I have another network with all my test lab servers. Can someone give me the steps for setting up the second network so it gains access to the Internet? I'm following a Windows Server base config guide that has you set up a 10.x.x.x network. My home network is a 192.x.x.x network. I have included a picture of what I'm trying to do. I understand the theory but do not know the steps.
This topic can be incredibly easy or incredibly complex.
As is, the easiest way to put the second network on the Internet is to connect your second uplink (NIC) to the second vSwitch (named vSwitch1). If those systems have the proper IP addresses and the proper routing config, Internet access should work.
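From the ESXi shell, that attachment would look roughly like this, assuming the second physical NIC shows up as vmnic1 (a placeholder; check the first command's output for the real name):

esxcli network nic list                                   # confirm the name of the second NIC
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1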

Second management interface/NIC/IP for ESXi 6.5

We have 3 ESXi servers that each have a public IP for manageability; however, for backups we need the servers to have an internal IP on a different NIC.
However, when we added a new VMkernel network, the original (public IP) network would no longer connect, leaving the server reachable only via the newly added LAN network.
Is there a solution so the servers are reachable on both NICs/IPs?
The 3 servers have this network configuration:
Interface 1: Dell iDRAC
Interface 2: VMWare public management network (public)
Interface 3: VMWare private management network (10.0.0.1/24)
Interface 4-5: Double redundant uplink
Interface 6-7: LAN network trunked
You may use the same vSwitch (with 2 uplinks and explicit LBFO settings for different port groups) or two different vSwitches, each using its own uplink: one for the external and one for the internal management network.
I think you can keep the external management network set up as it is now (same vSwitch, same management port group, the same vmk0 adapter in the default TCP/IP stack). This vmk0 adapter may have an IP configuration like this:
IP: 192.168.5.5/24
GW: 192.168.5.1 (may be defined for the default TCP/IP stack or on vmk0 itself)
For the internal management network, just create another vSwitch, a new management port group, and a new vmk1 adapter (see the sketch below). Imagine you want to use the internal management network like this:
IP: 10.5.5.5/24
GW: 10.5.5.1
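A rough sketch of those creation steps from the ESXi shell, with vSwitch2, vmnic2, and Mgmt-Internal as placeholder names (the IP and gateway are then set by the command below):

esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt-Internal --vswitch-name=vSwitch2
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt-Internal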
Because we cannot have 2 gateways in the default TCP/IP stack, you can define the gateway directly on vmk1 (this is supported in ESXi 6.5):
esxcli network ip interface ipv4 set -g 10.5.5.1 -i vmk1 -t static -I 10.5.5.5 -N 255.255.255.0
Once you do this, I think both the internal and external management networks should work for you. There may be some routing edge cases where this scheme does not work, but for your use case I think it should be fine.
In general, there is no problem with having two or more management interfaces. You should give us some more information about your network configuration. Did you change the default gateway in the host configuration? Remember that you may have only one default gateway, and if you changed it to the one correct for the LAN, then packets arriving on the public interface will not know how to return.
If this is the problem, you should set the default gateway properly for the public interface. But you also need connectivity from the LAN. If the machines in the LAN are in the same network segment, it should just work. If the machines are in another LAN, add an entry to the routing table, as described here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2001426
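For example, to reach a hypothetical backup LAN 192.168.100.0/24 via the internal gateway, using the esxcli syntax from that KB article:

esxcli network ip route ipv4 add --network 192.168.100.0/24 --gateway 10.5.5.1
esxcli network ip route ipv4 list    # verify the new entry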

Can I create an isolated internal network in the vCenter distributed switch?

I just want to know whether we can add an isolated internal network to the vCenter distributed switch.
One VM has a DHCP server.
Will it affect the public network?
Thanks.
Check out this VMware KB article. The virtual machines are required to be on the same host for it to work. In that case, it might be easier just to create a standard vSwitch with no uplink adapters.
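A minimal sketch of that standard-vSwitch alternative (names are placeholders); because no uplink is ever attached, traffic on the port group stays internal to the host:

esxcli network vswitch standard add --vswitch-name=vSwitch-Isolated
esxcli network vswitch standard portgroup add --portgroup-name=Isolated-Net --vswitch-name=vSwitch-Isolated

VMs attached to Isolated-Net can then reach each other (and the VM running the DHCP server) without touching the public network.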