I would like to set up a DPDK environment so that I can do packet generation and packet capture in one VM. Is this even possible? If it is, may I ask what the setup (port layout etc.) would look like in detail?
I have tried creating two DPDK-compatible ports in one VM, running pktgen on one port and packet capture on the other, but it doesn't work. Note that while running pktgen, I already specify the destination MAC address as the MAC address of the other port, which the packet capture app is sniffing.
It seems I need to either wire these two ports together physically or create a loopback between them, which I don't know how to do.
Thanks!
It should be possible. Please note that "it doesn't work" doesn't quite describe your problem, so I'll have to go off my assumptions here.
Two instances of DPDK (e.g. pktgen and, say, l3fwd) should be able to coexist on a single VM without any problems, provided that you run both with different prefixes, and use the PCI white/blacklist to ensure that no port is used in more than one instance of DPDK.
So, assuming you have your ports at 08:00.0 and 09:00.0, the following might be the command-line:
./dpdk_app1 -w 08:00.0 --file-prefix=app1 # use only 08:00.0, use prefix app1
./dpdk_app2 -w 09:00.0 --file-prefix=app2 # use only 09:00.0, use prefix app2
If you're not using a fairly recent version of DPDK (18.05+), you will also have to limit the amount of memory each application uses, as by default older versions of DPDK take over your entire hugepage memory. This is not an issue for 18.05+, so if you're on such a version you can disregard this paragraph.
Now, as to the logistics of how to connect the two ports - this is left up to you. You can connect the two ports back-to-back if you are using physical NICs (either using PCI pass-through or Virtual Functions). This is (IMO) the easiest way; however, bear in mind that a Virtual Function's MAC address needs to match the one defined by the host - otherwise the traffic will not be steered to/from your Virtual Functions.
I have never tried this, but it is reasonable to assume that sending traffic VF-to-VF directly should also work, provided you have set up your MAC addresses correctly. There are references to a DTS test[1] which does exactly that (only using two VMs instead of one, which I don't think makes any difference), so it should be possible.
You can also use entirely virtual ports, and use one of our software drivers (e.g. tun[2] or pcap drivers[3]) - it won't be performant, but it'll do the job.
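For instance, with the pcap driver you can make one application's TX become the other's RX through a capture file. A hypothetical invocation (the app names and file path are placeholders; the `--vdev`, `--no-pci` and `--file-prefix` options are standard EAL/pcap-PMD options):

```shell
# app1 transmits into a pcap file instead of a NIC
./dpdk_app1 --no-pci --vdev 'net_pcap0,tx_pcap=/tmp/wire.pcap' --file-prefix=app1
# app2 replays that file as its RX traffic
./dpdk_app2 --no-pci --vdev 'net_pcap1,rx_pcap=/tmp/wire.pcap' --file-prefix=app2
```

Note this is file-based rather than a live wire, so it suits functional testing (run the generator first, then the capture side) rather than continuous back-to-back traffic.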
[1] https://doc.dpdk.org/dts/test_plans/vf_to_vf_nic_bridge_test_plan.html
[2] https://doc.dpdk.org/guides/nics/tap.html
[3] https://doc.dpdk.org/guides/nics/pcap_ring.html
I am currently working on the design of a measurement system. It has three instruments mounted on a pan/tilt head, but only one serial line from the instruments to the BeagleBone Black (BBB) that controls everything. Instruments A and B are similar (they use the same commands and module). I'm using Python to control everything. During testing I had additional cables, so I could wire each instrument to a separate port on the BBB, but that is not possible in the final setup.
Since I needed some processing capabilities on top of the pan/tilt head anyway, I'm using a PIC24 device to connect all instrument serial connections to.
My idea is to multiplex the 3 serial connections, for instance by adding a prefix A_/B_/C_ to the commands/replies.
This I can do.
Communications and processing for instruments A and B are handled by the same Python module, which has a function measure() that takes the serial port (e.g. /dev/ttyO4) as one of its parameters. I'll obviously need to adapt this.
I need to find a way to allow different modules to access three "virtual" ports, with the choice of either stream A/B/C.
So in short: I (think I) need some kind of class (or similar) that opens the serial port and multiplexes/demultiplexes three streams. Instruments A and B are not to be used simultaneously, but A/C and B/C can be used at the same time. Timing is not critical; a couple hundred milliseconds of delay is not an issue.
One option would be to use a second PIC to do the reverse of the microcontroller near the instruments, but I suppose this should be possible in Python as well...
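As a minimal sketch of the prefix-based idea on the Python side (the SerialDemux class and the line framing are hypothetical; the real protocol, error handling and reader thread would differ):

```python
import queue

class SerialDemux:
    """Demultiplex one physical serial line into per-instrument streams.

    Assumes the PIC frames each reply as a text line prefixed with
    'A_', 'B_' or 'C_' (hypothetical framing; adapt to the real protocol).
    """

    def __init__(self, port):
        self.port = port  # any object with a write(bytes) method, e.g. serial.Serial
        self.queues = {name: queue.Queue() for name in ("A", "B", "C")}

    def feed(self, line):
        """Route one decoded reply line to the matching instrument queue."""
        prefix, sep, payload = line.partition("_")
        if sep and prefix in self.queues:
            self.queues[prefix].put(payload)

    def write(self, instrument, command):
        """Prefix a command and send it down the shared line."""
        self.port.write("{}_{}\n".format(instrument, command).encode())

    def read(self, instrument, timeout=0.5):
        """Blocking read of the next reply for one instrument."""
        return self.queues[instrument].get(timeout=timeout)
```

A background reader thread would call feed() with each line received from the PIC; measure() could then take a (demux, instrument) pair instead of a device path.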
I think the elegant solution is to add some code for your PIC to work as a Modbus slave.
There seem to be good efforts already done, so maybe you can use something like this as a starting point.
You can have the three UARTs connected to the sensors continuously writing to several Modbus registers and query those from your BBB with something like pymodbus or pylibmodbus.
It would also be possible to use other buses/protocols like CAN, but if you run Modbus directly over the TTL UART (instead of over RS485, which you won't need unless you have long distances or a noisy environment), you don't need any additional hardware. You will have to modify the firmware on your PIC and write a few more lines of Python on your BBB.
But if you want to learn something new (assuming you don't know already), Modbus is quite an easy and useful protocol to add to your toolbox. It's still very popular and open (the spec is publicly available and you have tons of info and code).
EDIT: I'm keeping my first answer as a reference for others, but the question did not refer to sharing the same physical cable for multiple ports, so what I wrote here is not really useful unless somebody misunderstands it the same way I did.
This question has come up a number of times, see for instance Receiving data from multiple devices using parallel wired RS232
Serial lines are not intended to be multiplexed; if you decide to follow this route I think you'll get many headaches.
Is there a reason not to use a multipoint protocol like RS485, SPI...? I'm sure you'll be able to find one that works for your needs. For RS485, for instance, the investment in new hardware would be minimal and the software side would be a piece of cake.
I'm writing a program that uses winpcap to capture some specific network traffic that is sent out by our switches.
However, wireless devices will never receive those packet so I'm trying to figure out how to determine if a network adapter is wireless or wired (so that I can then skip capturing on the wireless adapters altogether).
My first thought was to check the medium of the chosen interface (currently chosen based on the IP address of that adapter - the logic being that if it has an IP address, it is connected). The problem is that pcap_datalink() will return DLT_EN10MB whether it's wired or wireless.
The next thought was to try pcap_can_set_rfmon(), which should tell me whether the device can be set to monitor mode (and therefore whether or not it is wired). However, I get an LNK2019 linker error when I try to use it, which is supposedly because the function is not supported on Windows without AirPcap.
I don't really see what else to try, but it would be great if someone had any pointers. I'm wondering how difficult and convoluted it would become if I had to start using NDIS to determine what each adapter on a system is and then match that up to the device names used by WinPcap... surely this is something I could keep in-house with libpcap/WinPcap?
Thanks!
I have a solution of sorts, just for Windows systems.
For an adapter that I want to select based on the network it is connected to, I can compare the IP address associated with that adapter with each of the IP addresses in the IP_ADAPTER_INFO entries returned by GetAdaptersInfo. If they match, I can then check whether the "Type" field on that same entry is Ethernet.
// Wireless adapters only report IF_TYPE_IEEE80211 from Vista onwards,
// hence the additional (compile-time) Windows version check.
if ((pAdapterInfo->Type == MIB_IF_TYPE_ETHERNET) && (WINVER > _WIN32_WINNT_WS03))
{
    // wired adapter - safe to capture on it
}
I also check the Windows version, since it is only from Vista (WINVER 6+) onwards that IF_TYPE_IEEE80211 is returned if the adapter is wireless.
It doesn't use WinPcap, but then again I'm not sure it's possible to. Since I'm already using these Windows libraries elsewhere, I figured this is a platform-specific compromise I'll make. Hopefully it helps someone else one day!
Probably a stupid question (sorry!), but within VMware ESXi, is there any way I can share memory across VMs on the same blade so that two VMs can perform interprocess communication through the shared memory block rather than using messaging? I know I can share memory across VMs, but it's the interprocess communication I'm interested in. The aim is that the two VMs can access an in-memory DB really quickly, but (unlike a hosted-OS solution) if one VM went down, the other VM could merrily keep going.
According to the vSockets API documentation, it says:
"The original VMCI library was released as an experimental C language interface with Workstation 6.0. VMCI included a datagram API and a shared memory API. Both interfaces were discontinued in Workstation 6.5."
So it seems you can now only use the VMCI SOCK_DGRAM or SOCK_STREAM sockets. These behave much like TCP/IP sockets, and there is plenty of information on the net about how to write code that uses them; you basically just need the header vmci_sockets.h and off you go.
You will find the guest side of vsocket communication straightforward; however, you need to write some kind of server to run on the ESXi host to host your DB, and this is the tricky part. If you enable SSH on ESXi and have a sniff around, you will find a remarkably Unix-like system with a /dev/vsock device which you can query to get the vsocket protocol family (it may be different for your guests; you need to call the ioctl). Unfortunately (and I tried this), simply compiling a 64-bit statically linked Linux binary will get you nothing more than a segfault, as the ESXi OS isn't sufficiently similar to Linux.
There is an SDK that you can try to use to create programs, but it's not for the faint-hearted; some people have spent literally days just trying to get it to compile. You can try this route if you want, but I don't recommend it. Google for "VMware ESXi toolchain".
However, if you are looking for a faster route to getting your own server running on ESXi, and are prepared to use Python, you'll find both 5.5 and 6.0 come with a fairly up-to-date Python (2.7.9 on ESXi 6.0). VMware unhelpfully removed the .py library files, leaving just .pyc files, so you can't easily see whether this differs from the standard version, but it's certainly functional enough to get a basic socket server running.
The Python socket module doesn't understand the VMCI socket family, so you will need to use ctypes to bypass it and call the C library socket functions directly. You can create the socket in pure Python and even call listen(), but e.g. bind() implicitly expects you to be dealing with sockaddr_in structures.
So you will need to inspect the vmci_sockets.h header, and come up with Python ctypes structures and functions to mirror the ones that you'd use from C, e.g.:
from ctypes import Structure, c_ushort, c_uint, cdll, pointer, sizeof

class sockaddr_vm(Structure):
    _fields_ = [("svm_family", c_ushort),
                ("svm_reserved1", c_ushort),
                ("svm_port", c_uint),
                ("svm_cid", c_uint),
                ("svm_zero", c_uint),
               ]
This can then be passed into the libc bind/recvfrom/sendto calls which you can access with:
libc = cdll.LoadLibrary("libc.so.6")
And you can then bind to your socket bypassing Pythons socket module, i.e.
addr = sockaddr_vm()
addr.svm_family = af # address family obtained from ioctl
addr.svm_reserved1 = 0
addr.svm_cid = VMADDR_CID_ANY # 0xffffffff
addr.svm_port = port
addr.svm_zero = 0
# s is the socket, created with python socket.socket()
out = libc.bind(s.fileno(), pointer(addr), sizeof(addr))
One caveat of Python on ESXi is that you seem to be limited in the memory you can allocate: I tried allocating a 16 MB buffer and Python refused, raising a MemoryError. However, this wasn't a problem for what I was doing at the time, so I didn't try to find a way around it. I suspect there's a setting that can fix it.
Getting your script started at ESXi boot time is left as an exercise for the reader but this should get you started. Have fun!
VMCI is what you would require. Unfortunately, VMCI is deprecated between VMs from vSphere 6 and above.
My development target is a Linux computer that has two physical serial ports located at /dev/ttyS0 and /dev/ttyS1. I also expect /dev/ttyS2 and /dev/ttyS3 to be defined.
Using stty -f /dev/ttyS0 (and S1) reports the configuration of the two serial ports, and reports something meaning "doesn't exist" for S2 and S3.
The hardware designers are talking about offering USB to Serial ports built onto the main board. They'll be DB9 connectors on the outside and just circuitry - no USB connectors on the inside. The number of USB-to-serial connections is not guaranteed and I know enough to design for "many" instead of one.
So, in setting up my port server daemon, I need to be able to determine which ttyS and ttyUSB devices are "real" and which aren't. Will there ever be placeholder ttyUSBs? What if one were to be "unplugged" (say it was, indeed, a real USB coupler on the inside of the PC)?
Is there a better approach than popen()ing stty and examining its output to determine the status of the serial ports? Is there a C API for stty?
Thanks!
The "C API" that stty uses consists of tcgetattr(3) and tcsetattr(3).
For finding TTYs without opening the device you may look at this question:
How to find all serial devices (ttyS, ttyUSB, ..) on Linux without opening them?
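As a sketch of that C API from Python: the standard termios module wraps tcgetattr(3), so you can probe whether a device behaves like a real tty, roughly the way stty does (the port_exists helper name is mine):

```python
import os
import termios

def port_exists(path):
    """Return True if `path` is a device whose termios attributes can be
    read - roughly what stty checks. Hypothetical helper, not from stty."""
    try:
        fd = os.open(path, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
    except OSError:
        return False           # device node missing or unopenable
    try:
        termios.tcgetattr(fd)  # raises termios.error (ENOTTY/EIO) on non-serial devices
        return True
    except termios.error:
        return False
    finally:
        os.close(fd)
```

Placeholder ttyS nodes typically fail either the open() or the tcgetattr() call, so both paths return False; a ttyUSB node normally disappears entirely when the adapter is unplugged.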
I have a network application that I need to convert so that it works for ipv6 network. Could you please let me know what I need to do (replace socket APIs)?
One more thing, how can I test my application?
Thanks.
The core socket system calls are protocol neutral. You will need to use AF_INET6 instead of the standard AF_INET address family, as well as PF_INET6, sockaddr_in6 and others when appropriate.
I'd suggest having a read through the "ipv6" man page or the "socket interface extensions for ipv6" RFC: http://www.ietf.org/rfc/rfc3493.txt
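A minimal sketch of the protocol-neutral pattern that RFC 3493 recommends, shown in Python (whose socket module mirrors the C API; the connect_any helper name is mine):

```python
import socket

def connect_any(host, port):
    """Connect over IPv6 or IPv4, whichever getaddrinfo() offers first.

    Because the address family comes from getaddrinfo() rather than being
    hard-coded as AF_INET, the same code path handles both protocols and
    falls back if one family's address is unreachable.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        try:
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

The equivalent C code uses getaddrinfo(3) and loops over the returned addrinfo list the same way.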
Similar and possibly relevant question: Is IPv6 backward compatible with IPv4?
The 3rd edition of "Unix Network Programming" has numerous examples and a whole chapter on IPv4/IPv6 interoperability.
For testing, you can create a bunch of virtual machines with Microsoft Virtual PC (or similar) and test the app between them - you can easily put them on a private network where they can only see each other.
Take a look at http://gsyc.escet.urjc.es/~eva/IPv6-web/ipv6.html - it is a rather comprehensive resource, and has some useful references to RFCs.
For the testing considerations, if your application will be dual-stack, consider the following failure scenario: the IPv6 traffic may be blackholed for various reasons, an example being a user who uses 6to4 anycast tunneling but whose traffic to/from 192.88.99.1 (the 6to4 anycast relay address) is dropped. Try to test this case and ensure the application falls back gracefully without destroying the user experience; this will save a few support calls later.
(NB: I'm talking specifically about blackholing because in the case of "normal" errors like a routing problem, the error will usually be returned rather fast. So you might consider putting some kind of router between the hosts that you can configure to silently drop packets.)