Is it possible to add an isolated internal network on a vCenter distributed switch?
One VM runs a DHCP server.
Will it affect the public network?
Thanks.
Check out this VMware KB article. You're required to have the virtual machines on the same host for it to work. In that case, it might be easier to just create a standard vSwitch with no uplink adapters.
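As a sketch, you could create such an internal-only vSwitch from the ESXi shell like this (the vSwitch and port group names are just examples):

```shell
# Create a standard vSwitch with no uplink adapters (internal-only traffic).
esxcli network vswitch standard add --vswitch-name=vSwitchIsolated

# Add a port group for the isolated VMs; attach the DHCP-server VM and the
# test VMs to this port group. With no uplinks, DHCP broadcasts stay on
# this vSwitch and cannot leak onto the public network.
esxcli network vswitch standard portgroup add \
    --portgroup-name=IsolatedLab --vswitch-name=vSwitchIsolated
```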
I'm new to DPDK, and I have a question about how to announce a VIP through BGP for my DPDK-based load balancer.
The design looks like this:
I have multiple hosts (virtual machines or physical machines), and every host advertises the same VIP, 10.0.0.10 for example.
Each LB host has two NICs: one for DPDK, one for admin purposes.
The LB forwards packets to backend servers based on the VIP configuration.
Normally I use Bird or Quagga for BGP advertising, but I don't know how to make BGP work with DPDK applications. Do I need to implement BGP and ARP within the DPDK application? That seems like overkill to me.
I searched online, but there seems to be little information about this for DPDK. I hope DPDK experts can provide some hints or point me to related resources.
Update:
My question is: how do I announce the VIP with Quagga/Bird for my DPDK application (NIC2)? The DPDK application and Quagga/Bird run on the same machine.
After more investigation, I found that one solution is to use DPDK KNI to forward BGP traffic to the Linux kernel, so Quagga/Bird can advertise the VIP through the KNI interface.
I'm wondering whether it's possible to announce the VIP through NIC1 on behalf of NIC2 (which is bound to the DPDK application), so I don't need to implement KNI in my DPDK application.
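For reference, the KNI approach mentioned above roughly looks like this from the host side (the module path and the interface name vEth0 depend on your DPDK build and on what your application passes to rte_kni; the VIP is the example address from the question):

```shell
# Load the KNI kernel module shipped with the DPDK build
# (path varies per build/installation).
insmod ./kmod/rte_kni.ko

# Once the DPDK application has created its KNI interface (often named
# vEth0 in the DPDK examples), bring it up and assign the VIP so that
# Bird/Quagga running on the same machine can advertise it. BGP and ARP
# traffic punted through KNI is then handled by the normal kernel stack.
ip link set vEth0 up
ip addr add 10.0.0.10/32 dev vEth0
```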
I have two machines within GCP. Both machines are on the same subnet.
As I understand it, GCP is built on SDN, so there is no traditional switching. In other words, there is no ARP resolution that would let my two machines communicate directly with each other, omitting the default gateway.
Am I right? Can you please shed some light on this?
I am not sure what you mean by:
In other words, there is no ARP resolution that would let my two machines
communicate directly with each other, omitting the default gateway.
ARP and RARP are supported. ARP lookups are handled in kernel software.
Once two systems have communicated, the operating systems know about each other and the IP-to-MAC mapping. If the systems have not communicated previously, a MAC lookup takes place, which is managed by the VPC network.
VPC networks use Linux's VIRTIO network module to model Ethernet card and router functionality, but higher levels of the networking stack, such as ARP lookups, are handled using standard networking software.
ARP lookup
The instance kernel issues ARP requests and the VPC network issues ARP replies. The mapping between MAC addresses and IP addresses is handled by the instance kernel.
MAC lookup table, IP lookup table, active connection table
These tables are hosted on the underlying VPC network and cannot be inspected or configured.
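You can observe the kernel-side part of this on an instance yourself; for example (the peer address is a placeholder for another VM on your subnet):

```shell
# Show the instance's ARP/neighbor cache. On GCP the replies for these
# entries come from the VPC network rather than from a physical switch.
ip neigh show

# Trigger a lookup for a peer on the same subnet, then check the cache
# again to see the learned IP-to-MAC mapping.
ping -c 1 10.128.0.3
ip neigh show 10.128.0.3
```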
Advanced VPC concepts
We have a number of third-party systems which are not part of our AWS account and not under our control. Each of these systems has an internal IIS server, with DNS that is only available from the local computer. This IIS server hosts an API which we want to use from our EC2 instances.
My idea is to set up some type of VPN connection between the EC2 instance and the third-party system so that the EC2 instance can use the same internal DNS to call the API.
AWS provides Direct Connect; is that the correct path to go down in order to do this? If it is, can anyone provide help on how to move forward? If it's not, what is the correct route?
Basically, we have a third-party system running an IIS server with some software which contains an API. From the local machine I can run http://<domain>/api/get and it returns JSON. However, to get onto the third-party system, we attach via a VPN on an individual laptop. We need our EC2 instance in AWS to access this API, so it needs to connect to the third party via the same VPN connection. So I think I need a separate VPC within AWS.
The best answer depends on your budget, bandwidth and security requirements.
Direct Connect is excellent. This service provides a dedicated physical network connection from your point of presence to Amazon. Once Direct Connect is configured and running, you then configure a VPN (IPsec) over this connection. Negatives: long lead times to install the fibre, and it is relatively expensive. Positives: high security and predictable network performance.
For your situation, you will probably want to consider setting up a VPN over the public Internet. Depending on your requirements, I would recommend installing Windows Server on both ends, linked via a VPN. This will give you an easy-to-maintain system, provided you have Windows networking skills available.
Another good option is OpenSwan installed on two Linux systems. OpenSwan provides the VPN and routing between the networks.
Setup for Windows or Linux (OpenSwan) is easy. You could configure everything in a day or two.
Both Windows and OpenSwan support a hub architecture: one system in your VPC and one system in each of your data centers.
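As a rough sketch of the OpenSwan side, a site-to-site tunnel definition might look like this (all addresses, subnets and the connection name are placeholders for your actual VPC and data-center values):

```
# /etc/ipsec.conf fragment (illustrative): a tunnel between the VPC
# subnet and one data-center subnet, authenticated by a pre-shared key.
conn vpc-to-dc1
    left=198.51.100.10          # VPC-side OpenSwan host (Elastic IP)
    leftsubnet=10.0.0.0/16      # VPC CIDR
    right=203.0.113.20          # data-center OpenSwan host
    rightsubnet=192.168.10.0/24 # data-center LAN
    authby=secret
    auto=start
```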
Depending on the routers installed in each data center, you may be able to use AWS Virtual Private Gateways. The routers in each data center are set up with the connection information, and then you connect the Virtual Private Gateways to those routers. This is actually a very good setup if you have the right hardware in your data centers (i.e. a router that Amazon supports, and that list is quite long).
Note: you probably cannot use a VPN client, as a client does not route two networks together, just a single system to a network.
You will probably need to setup a DNS Forwarder in your VPC to communicate back to your private DNS servers.
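One simple way to do that is a small dnsmasq instance in the VPC doing conditional forwarding; for example (the internal domain and DNS server address are placeholders):

```
# /etc/dnsmasq.conf fragment: forward only the third party's internal
# zone to their DNS server across the VPN; all other queries use the
# VPC's default resolver.
server=/internal.thirdparty.example/192.168.10.53
```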
Maybe sshuttle can do what you need. Technically, it opens an SSH tunnel between your EC2 instance and a remote SSH host. It can also resolve DNS requests on the remote side. It is not a perfect solution, since a typical VPN has failover, but you can use it as a starting point, and later perhaps as a fallback or for testing purposes.
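A typical invocation would look something like this (the SSH host, user and subnet are placeholders for your environment):

```shell
# Route all traffic destined for the remote subnet through an SSH
# connection to a host that can reach it, and (--dns) forward DNS
# lookups to the resolver on the remote side as well.
sshuttle --dns -r ec2-user@vpn-host.example.com 192.168.10.0/24
```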
Need help with setting up a home lab. I have one VMware ESXi server running ESXi 5.5. It has 2 physical NICs. I have one network that has access to the internet, and another network with all my test lab servers. Can someone give me the steps to set up the second network so it has access to the Internet? I'm following a Windows Server base configuration guide that has you set up a 10.x.x.x network. My home network is a 192.x.x.x network. I have included a picture of what I'm trying to do. I understand the theory but do not know the steps.
This topic can be incredibly easy or incredibly complex.
As is, the easiest way to put the second network on the internet is to connect your second uplink (NIC) to the second vSwitch (named vSwitch1). If those systems have the proper IP addresses and the proper routing config, internet access should work.
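Assuming the second physical NIC shows up as vmnic1 (check with `esxcli network nic list`; the name is an assumption), attaching it as an uplink can be done from the ESXi shell:

```shell
# Attach the second physical NIC as an uplink of vSwitch1 so the lab
# network can reach the physical network and, from there, the internet.
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch1
```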
We have 3 ESXi servers that each have a public IP for manageability; however, for backups we need the servers to have an internal IP on a different NIC.
However, when we add a new VMkernel network, the original (public IP) network no longer connects, resulting in the server being reachable only via the newly added LAN network.
Is there a solution we can use so the servers are reachable on both NICs/IPs?
The 3 servers have this network configuration:
Interface 1: Dell iDRAC
Interface 2: VMware public management network (public)
Interface 3: VMware private management network (10.0.0.1/24)
Interface 4-5: Double redundant uplink
Interface 6-7: LAN network trunked
You may use the same vSwitch (with 2 uplinks and explicit LBFO settings for the different port groups), or two different vSwitches each using its own uplink: one for the external and another for the internal management network.
I think you can keep the external management network setup as it is now (same vSwitch, same management port group, the same vmk0 adapter in the default TCP/IP stack). This vmk0 adapter may have an IP configuration like this:
IP: 192.168.5.5/24
GW: 192.168.5.1 - it may be defined for the default TCP/IP stack or on vmk0 itself
For the internal management network, just create another vSwitch, a new management port group, and a new vmk1 adapter. Imagine you want to use the internal management network like this:
IP: 10.5.5.5/24
GW: 10.5.5.1
Because we cannot have 2 gateways in the default TCP/IP stack, you can define the gateway directly on vmk1 (this is supported since ESXi 6.5):
esxcli network ip interface ipv4 set -g 10.5.5.1 -i vmk1 -t static -I 10.5.5.5 -N 255.255.255.0
Once you do this, I think both internal and external management networks should work for you. There may be some edge cases with routing where this scheme may not work, but I think for your use-case it should be fine.
In general, there is no problem with having two or more management interfaces, but you should give us some more information about your network configuration. Did you change the default gateway in the host configuration? Remember that you may have only one default gateway; if you changed it to the LAN gateway, then packets arriving via the public interface will not know how to return.
If this is the problem, you should set the default gateway properly for the public interface. But you also need to connect from the LAN. If the machines in the LAN are in the same network segment, it should just work. If the machines are in another LAN, add an entry to the routing table, as described here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2001426
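For example, a static route for a remote management subnet can be added on the host like this (the subnet and gateway are illustrative, based on the 10.5.5.0/24 internal network used above):

```shell
# Send traffic for a remote management workstation subnet via the
# internal LAN gateway instead of the default gateway.
esxcli network ip route ipv4 add --network 10.10.0.0/16 --gateway 10.5.5.1
```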