I have two machines within GCP. Both machines are on the same subnet.
The way I understand it, GCP is built on SDN, so there is no traditional switching. In other words, there is no ARP resolution for my two machines to communicate directly with each other, bypassing the default gateway.
Am I right? Can you please shed some light on this?
I am not sure what you mean by:
In other words, there is no ARP resolution for my two machines to communicate directly with each other, bypassing the default gateway.
ARP and RARP are supported. ARP lookups are handled in kernel software.
Once two systems have communicated with each other, the operating systems know about each other and the IP-to-MAC mapping. If the systems have not communicated previously, then a MAC lookup takes place, which is managed by the VPC network.
VPC networks use Linux's VIRTIO network module to model Ethernet card and router functionality, but higher levels of the networking stack, such as ARP lookups, are handled using standard networking software.
ARP lookup
The instance kernel issues ARP requests and the VPC network issues ARP replies. The mapping between MAC addresses and IP addresses is handled by the instance kernel.
MAC lookup table, IP lookup table, active connection table
These tables are hosted on the underlying VPC network and cannot be inspected or configured.
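If you want to see the kernel's side of this mapping on an instance, the neighbour table is the ordinary Linux one; a minimal sketch that dumps it (Linux-only, reads /proc/net/arp, nothing GCP-specific assumed):

```python
# Minimal sketch: dump the ARP (neighbour) table the instance kernel maintains.
# /proc/net/arp is a standard Linux kernel interface, nothing GCP-specific.

def read_arp_table(path="/proc/net/arp"):
    entries = []
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            ip, hw_type, flags, mac, mask, device = line.split()
            entries.append({"ip": ip, "mac": mac, "device": device})
    return entries

if __name__ == "__main__":
    for e in read_arp_table():
        print(f"{e['ip']:<16} {e['mac']}  ({e['device']})")
```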
Advanced VPC concepts
I'm new to DPDK and have a question about how to announce a VIP through BGP for my DPDK-based LB.
The design looks like this:
I have multiple hosts (virtual machines or physical machines), every host advertising the same VIP, 10.0.0.10 for example.
Each LB host has two NICs, one for DPDK, one for admin purposes.
The LB forwards packets to the backend servers based on the VIP configuration.
Normally I use Bird or Quagga for BGP advertising, but I don't know how to make BGP work with a DPDK application. Do I need to implement BGP and ARP within the DPDK application? That seems like overkill to me.
I searched online, but there seems to be little information about this for DPDK. I hope DPDK experts can give me some hints or point me to related resources.
Update:
My question is: how do I announce the VIP with Quagga/Bird for my DPDK application (NIC2)? The DPDK application and Quagga/Bird run on the same machine.
After more investigation, I found that one solution is to use DPDK KNI to forward BGP traffic to the Linux kernel, so Quagga/Bird can advertise the VIP through the KNI interface (a sketch of this approach is below).
I'm wondering whether it is possible to announce the VIP through NIC1 on behalf of NIC2, which is bound to the DPDK application, so I don't need to implement KNI in my DPDK application.
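For what it's worth, the KNI approach boils down to classifying control-plane frames (ARP, and BGP on TCP/179) in the fast path and punting them to the kernel interface, where the kernel ARP stack and Quagga/Bird handle them. The real dataplane would be the DPDK C API (handing matching mbufs to the KNI device), but the classification decision itself is simple; here is a rough sketch of just that decision in Python, operating on a raw Ethernet frame:

```python
import struct

ETH_TYPE_ARP = 0x0806
ETH_TYPE_IPV4 = 0x0800
BGP_PORT = 179

def is_control_plane(frame: bytes) -> bool:
    """Return True if this Ethernet frame should be punted to the kernel
    (via KNI) so the kernel ARP stack and the BGP daemon can handle it."""
    if len(frame) < 14:
        return False
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == ETH_TYPE_ARP:
        return True                       # let the kernel answer/learn ARP
    if ethertype != ETH_TYPE_IPV4 or len(frame) < 34:
        return False
    ihl = (frame[14] & 0x0F) * 4          # IPv4 header length in bytes
    proto = frame[14 + 9]
    if proto != 6:                        # not TCP
        return False
    tcp = frame[14 + ihl:]
    src_port, dst_port = struct.unpack("!HH", tcp[:4])
    return BGP_PORT in (src_port, dst_port)   # BGP session traffic
```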
I'd like to create the equivalent of an AWS VPC or OpenStack network in VirtualBox, such that a single IP address can use IPVS to load-balance between machines.
This is a way, for me, to simulate more complex deployments of my distributed app in a real world SDN scenario.
Workarounds
Since this question lacks context, here are a few workarounds that may be suitable for my use case, which also shed light on the end requirement:
1) A lightweight VM using HAProxy
2) A lightweight VM that sets up iptables rules to route randomly (by percentage) to n distinct downstream addresses.
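For workaround 2), the usual building block is iptables' statistic match in random mode, where rule i of n matches with probability 1/(n-i) so traffic splits roughly evenly. A sketch that generates (and can optionally apply) those DNAT rules; the VIP, port and backend addresses are placeholders:

```python
import subprocess

# Placeholders -- adjust to your lab setup.
VIP = "192.168.56.100"
PORT = "80"
BACKENDS = ["192.168.56.11", "192.168.56.12", "192.168.56.13"]

def dnat_rules(vip, port, backends):
    """Yield iptables commands that split traffic across the backends using
    the 'statistic' match in random mode (rule i gets p = 1/(n-i))."""
    n = len(backends)
    for i, backend in enumerate(backends):
        rule = ["iptables", "-t", "nat", "-A", "PREROUTING",
                "-d", vip, "-p", "tcp", "--dport", port]
        if i < n - 1:  # the last rule catches whatever is left
            rule += ["-m", "statistic", "--mode", "random",
                     "--probability", f"{1.0 / (n - i):.4f}"]
        rule += ["-j", "DNAT", "--to-destination", f"{backend}:{port}"]
        yield rule

if __name__ == "__main__":
    for cmd in dnat_rules(VIP, PORT, BACKENDS):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)   # uncomment to actually apply
```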
I am not sure exactly what is happening, since this always worked before, but VMs on different subnets within the same virtual network, with no NSGs or firewalls between them, cannot talk to each other. Ping is failing, as is every other form of communication. Firewalls are disabled on both sides. All machines have Internet access. Communication was attempted using IP addresses, not names. Both ping and TCP-based tests were used.
The effective route for app01, for example, is below.
By default, Azure allows communication between subnets in the same VNet.
Your issue seems to be on the Azure side; I suggest you open a support ticket in the Azure Portal.
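Before opening the ticket, it can help to capture the result of a plain TCP probe from one VM to the other so support has concrete data; a minimal sketch, where the target IP and port are placeholders:

```python
import socket

# Example values -- replace with the private IP of the peer VM and a port
# that is actually listening there (e.g. 22 or 3389).
TARGET_IP = "10.0.2.4"
TARGET_PORT = 22
TIMEOUT_S = 3

def tcp_probe(ip, port, timeout=TIMEOUT_S):
    """Attempt a TCP connect and report the outcome; a timeout usually points
    at routing/NSG/UDR issues rather than the guest firewall."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return "connected"
    except socket.timeout:
        return "timed out (traffic likely dropped in transit)"
    except OSError as exc:
        return f"refused/unreachable: {exc}"

if __name__ == "__main__":
    print(f"{TARGET_IP}:{TARGET_PORT} -> {tcp_probe(TARGET_IP, TARGET_PORT)}")
```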
We have a number of 3rd-party systems which are not part of our AWS account and not under our control. Each of these systems has an internal IIS server, set up with DNS that is only available from the local computer. This IIS server hosts an API which we want to be able to utilise from our EC2 instances.
My idea is to set up some type of VPN connection between the EC2 instance and the 3rd-party system so that the EC2 instance can use the same internal DNS to call the API.
AWS provides Direct Connect; is that the correct path to go down in order to do this? If it is, can anyone provide any help on how to move forward? If it's not, what is the correct route for this?
Basically, we have a third-party system, and on this third-party system is an IIS server running some software which exposes an API. From the local machine I can request http://<domain>/api/get and it returns a lot of JSON. However, in order to get onto the third-party system, we connect via a VPN on an individual laptop. We need our EC2 instance in AWS to be able to access this API, so it needs to connect to the third party via the same VPN connection. So I think I need a separate VPC within AWS.
The best answer depends on your budget, bandwidth and security requirements.
Direct Connect is excellent. This service provides a dedicated physical network connection from your point of presence to Amazon. Once Direct Connect is configured and running, you then configure a VPN (IPsec) over this connection. Negatives: long lead times to install the fibre, and it is relatively expensive. Positives: high security and predictable network performance.
For your situation, you will probably want to consider setting up a VPN over the public Internet. Depending on your requirements, I would recommend installing Windows Server on both ends, linked via a VPN. This gives you an easy-to-maintain system, provided you have Windows networking skills available.
Another good option is OpenSwan installed on two Linux systems. OpenSwan provides the VPN and the routing between the networks.
Setup for either Windows or Linux (OpenSwan) is easy; you could configure everything in a day or two.
Both Windows and OpenSwan support a hub architecture: one system in your VPC and one system in each of your data centers.
Depending on the routers installed in each data center, you may be able to use AWS Virtual Private Gateways. The routers are set up in each data center with the connection information, and then you connect the virtual private gateways to the routers. This is actually a very good setup if you have the correct hardware installed in your data centers (e.g. a router that Amazon supports, of which there are quite a few).
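If you go the Virtual Private Gateway route, the moving parts on the AWS side are a virtual private gateway attached to the VPC, a customer gateway describing the remote router, and a VPN connection between the two. A rough boto3 sketch, with the region, IDs, addresses and static route all as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")   # example region

VPC_ID = "vpc-0123456789abcdef0"        # placeholder: your VPC
REMOTE_PUBLIC_IP = "203.0.113.10"       # placeholder: third party's router public IP
REMOTE_CIDR = "192.168.100.0/24"        # placeholder: network behind that router

# 1. Virtual private gateway, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId=VPC_ID, VpnGatewayId=vgw)

# 2. Customer gateway describing the third party's router.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp=REMOTE_PUBLIC_IP, Type="ipsec.1"
)["CustomerGateway"]["CustomerGatewayId"]

# 3. VPN connection between the two (static routing here to keep it simple).
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw, VpnGatewayId=vgw, Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]["VpnConnectionId"]
ec2.create_vpn_connection_route(VpnConnectionId=vpn, DestinationCidrBlock=REMOTE_CIDR)
```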
Note: you probably cannot use a VPN client, as a client does not route two networks together, just a single system to a network.
You will probably need to set up a DNS forwarder in your VPC to communicate back to your private DNS servers.
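To verify the forwarder from an EC2 instance, a quick lookup against it is enough; a small sketch using dnspython, where the forwarder IP and the internal name are placeholders:

```python
import dns.resolver   # pip install dnspython

FORWARDER_IP = "10.0.0.2"            # placeholder: your in-VPC DNS forwarder
INTERNAL_NAME = "api.corp.example"   # placeholder: the third party's internal name

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [FORWARDER_IP]
resolver.lifetime = 3.0              # fail fast if the forwarder is unreachable

answer = resolver.resolve(INTERNAL_NAME, "A")   # raises on NXDOMAIN/timeout
for record in answer:
    print(f"{INTERNAL_NAME} -> {record.address}")
```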
Maybe sshuttle can do what you need. Technically, you open an SSH tunnel between your EC2 instance and a remote SSH host. It can also resolve DNS requests on the remote side. It is not a perfect solution, since a typical VPN has failover, but you can use it as a starting point, and later perhaps as a fallback or for testing purposes.
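If you try sshuttle, the call is essentially one command; a small launcher sketch, where the SSH host and the remote subnet are placeholders (--dns forwards name resolution to the remote side, as mentioned above):

```python
import subprocess

REMOTE = "ec2-user@bastion.example.com"   # placeholder: an SSH host on the remote side
SUBNET = "192.168.100.0/24"               # placeholder: the third party's internal range

# --dns also forwards DNS lookups to the remote side, so the internal
# IIS hostname resolves just as it does on the laptop.
cmd = ["sshuttle", "--dns", "-r", REMOTE, SUBNET]
subprocess.run(cmd, check=True)
```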
https://docs.marklogic.com/guide/database-replication/configuring
I was reading the documentation on Data Replication, the section on security, and it references XDQP, but searching the documents and developer.marklogic.com, I was not able to find anything that describes what XDQP means. Can someone please clarify and point me to documentation with more information?
XDQP is the protocol MarkLogic nodes use to talk to each other.
The name is an acronym for XML Data Query Protocol, if I remember right, but it's evolved to be more than that.
It's an undocumented internal-only protocol.
The most relevant points to consider:
It's a TCP/IP-based (but not HTTP) protocol and runs on port 7999 by default (changeable).
Multiple sockets are opened to each host for redundancy
All hosts in a cluster need to be able to communicate with all other hosts on that port at all times. The 'hostname' of each host must resolve to an IP address by which it can be reached by all other hosts in the cluster (not necessarily the same IP as used for client connections).
Therefore any firewalls, iptables rules, routers, network security, etc. need to be configured to allow bidirectional TCP/IP traffic, initiated and received by every host in the cluster to every other host, at the TCP/IP level (not HTTP), without port rewriting or content-based filtering/routing (a quick check is sketched after these points).
There is a continual 'heartbeat' that synchronizes all servers to the same clock (transaction timestamp), keeps a consistent view of the 'quorum', and propagates configuration changes. If this is interrupted, a host becomes disconnected from the cluster; if that host holds critical data, the cluster may stop being fully functional.
Monitoring the traffic patterns (not the content) can sometimes be useful in debugging or in predicting performance issues or unusual behaviour.
Any 'dead periods' between any two hosts on this port are an indication of some kind of problem; conversely, any interruption of network availability on this port will cause the cluster to re-form and determine whether the subset of hosts reachable by any one host is sufficient to be a 'quorum' (the 'live' part of the cluster) or whether the host(s) are the inactive part of a disjoint cluster.
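A quick way to sanity-check the name-resolution and port-7999 requirements above, run from each host in turn, is a small script like the following (the hostnames are placeholders for the names configured in the cluster):

```python
import socket

XDQP_PORT = 7999
# Placeholders: the hostnames exactly as they appear in the cluster configuration.
CLUSTER_HOSTS = ["ml-node1.example.com", "ml-node2.example.com", "ml-node3.example.com"]

def check(host, port=XDQP_PORT, timeout=3):
    """Resolve the host name and try a TCP connect on the XDQP port."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror as exc:
        return f"DNS FAILED ({exc})"
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return f"{ip}: port {port} reachable"
    except OSError as exc:
        return f"{ip}: port {port} NOT reachable ({exc})"

if __name__ == "__main__":
    for h in CLUSTER_HOSTS:
        print(f"{h:<28} {check(h)}")
```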