How does EtherCAT support different network topologies?

Assume a pure EtherCAT network without any standard Ethernet switches, hubs, etc. to complicate things, and with one master and multiple slaves.
Some sources (e.g. Wikipedia) describe it as only supporting ring topologies, and this makes sense given the theory of operation, but the EtherCAT website says it supports other topologies as well.
100BASE-TX Ethernet cables contain two half-duplex links, one in each direction; is it true that when viewed as a graph of half-duplex links, EtherCAT is always a ring bus, but when viewed as a graph of physical Ethernet cables, the graph can be almost arbitrary?

That's right.
When viewed physically, there can be lots of topologies: daisy chain, star, tree, etc. For example, you can use a Beckhoff EK1122 module to create a three-branch star topology. Logically, there is a single determined path around all the nodes (master and slaves) that EtherCAT frames go through. That forms a ring, because the master is the source that initiates all frames and is also the final destination that all frames return to.

An EtherCAT "loop" is a connected set of slave devices, each of which can connect to at most four neighboring devices. These four possible connections are called ports and are numbered 0-3. Port 0 is the "upstream" connection, which I usually describe as connecting to the slave's parent device; port 1 is usually whatever the "straight through" path would be.
If you take a bus coupler (EK1100) for example, it has:
port 0: RJ45 socket (for Ethernet 8P8C connector) labelled "X1 In"
port 1: EBUS-Out (for EBUS slice connections)
port 2: RJ45 socket labelled "X2 Out"
For comparison an EBUS junction has:
port 0: EBUS-In (for connection to upstream EBUS slices)
port 1: EBUS-Out
port 2: RJ45 socket labelled "X1"
port 3: RJ45 socket labelled "X2"
And a bus extension (EK1110) has:
port 0: EBUS-In
port 1: RJ45 socket labelled "X1 Out"
These connections form a graph where every slave is a node having exactly one parent and at most three children. Each edge in the graph represents a bidirectional Ethernet connection between two ports. Once you have built up this connected graph of slaves, the auto-increment numbering scheme results from a depth-first traversal of the tree, numbering each new slave with the next free number. Sub-graphs are explored along port 1, port 3, then port 2 (no clue why it's that order).
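To make the traversal concrete, here is a minimal sketch of that numbering scheme (the Slave struct and the starting number are my own assumptions, not taken from any real master stack):

#include <array>

struct Slave {
    // children[0..2] hold whatever is attached to ports 1, 3 and 2, in that order
    std::array<Slave*, 3> children{};
    int autoIncAddress = -1;
};

// Depth-first walk: number each newly reached slave with the next free number,
// then descend along port 1, then port 3, then port 2.
void assignAddresses(Slave* s, int& next)
{
    if (s == nullptr)
        return;
    s->autoIncAddress = next++;
    for (Slave* child : s->children)
        assignAddresses(child, next);
}

// Usage: build the tree below the master, then call
//   int next = 0; assignAddresses(firstSlave, next);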
So, yes, each half duplex link is traversed only once during a packet transmission through the network, meaning that it can be viewed as a ring of half duplex links, with each slave-to-slave connection appearing on the ring in two places (once for each direction of traversal).

(Some additional info)
If you have a look at the way EtherCAT masters address their slaves, you will see that even if you have a daisy-chain topology, the telegram transport behaves like a line topology. That is because the master counts all the slaves present on the bus and assigns an auto-increment address to each of them (in the first phase). That is the order in which the telegram is processed by the slaves. So the master passes the telegram to slave 1, which inserts its data into its section on the fly and passes it to slave 2, and so on. The last slave closes the bus and sends the telegram back. The user manual sometimes uses the word "shortcutted" for this.
So physically you can have nearly every topology you want, but logically you have a line. If you'd like to have redundancy, you could connect the last slave to a second EtherCAT port at the master. This would give you a real ring topology, and the bus would still work if a slave goes down (excluding the defective slave).

As Eric Z has answered above, it may be a physical line, ring, star or tree, and he says that the packet will go through a logical ring. But he did not say how this is achieved (see my comments on his answer), so I dug a little deeper and found this article:
http://digital.ni.com/public.nsf/allkb/3399C1A0211EDC14862580140065286B
which describes that a "dedicated EtherCAT junction" is needed to build a star (or a tree):
Star:
This is the most familiar topology for many new to EtherCAT®, as it resembles a regular Ethernet network using hubs. However, to implement this, you will need a dedicated EtherCAT® junction. Because of this, it is potentially costlier than ring or line. Also, this topology will be marginally slower than others, as there are more interstitial nodes that must repeat the message between the end nodes (e.g. for an EtherCAT® packet to go from master to slave, it must go through the junction/hub first, which introduces a small delay). In fact, EtherCAT® star topology is not like traditional star topologies – it is actually a line topology in which data goes through junction port 1, reaches its end slave and comes back to the junction, and then goes through junction port 2 the same way. This topology is best for systems in locations with physical constraints that make it difficult to implement line or ring.
Searching for "EtherCAT junction" I found
https://www.beckhoff.com/english.asp?ethercat/ek1122.htm
which is actually the product that Eric Z mentioned, a 2-port EtherCAT junction. There are 8-port devices as well: https://www.beckhoff.com/english.asp?pc_cards_switches/cu1128.htm

Related

Qt modbus serial port flow control handling

I'm writing a small program using QModbusDevice over the serial port (using the QModbusRtuSerialMaster class) and have some problems.
One of the problems seems to be that the flow control of the serial port is incorrect. Checking in a serial port sniffer I see that a working client sets RTS on when it sends requests, and then RTS off to receive replies. When I use QModbusRtuSerialMaster to send messages that doesn't happen.
The message is sent correctly (sometimes, subject for another question) compared to the working client. It's just the control flow that doesn't work and which causes the servers to be unable to reply.
I have set the Windows port settings for the COM-port in question to hardware flow control but it doesn't matter, the sniffer still reports no flow control.
Is there a way to get QModbusRtuSerialMaster to set the flow control as I would like? Or is there a way to manually handle the flow control (which is what the working client does)? Or is the only solution to skip the Qt modbus classes and make up my own using the serial port directly?
A short summary of what I'm doing...
First the initialization of the QModbusRtuSerialMaster object:
QModbusDevice* modbusDevice = new QModbusRtuSerialMaster(myMainWindow);
modbusDevice->setConnectionParameter(QModbusDevice::SerialPortNameParameter, "COM3");
modbusDevice->setConnectionParameter(QModbusDevice::SerialParityParameter, QSerialPort::NoParity);
modbusDevice->setConnectionParameter(QModbusDevice::SerialBaudRateParameter, QSerialPort::Baud115200);
modbusDevice->setConnectionParameter(QModbusDevice::SerialDataBitsParameter, QSerialPort::Data8);
modbusDevice->setConnectionParameter(QModbusDevice::SerialStopBitsParameter, QSerialPort::OneStop);
modbusDevice->setTimeout(100);
modbusDevice->setNumberOfRetries(3);
modbusDevice->connectDevice();
Then how I send a request:
auto response = modbusDevice->sendReadRequest(QModbusDataUnit(QModbusDataUnit::Coils, 0, 1), 1);
QtModbus does not implement an automatic toggling for the RTS line because it expects your hardware to do it on its own (with a dedicated line instead).
This should be the case for most RS485 converters (even cheap ones). You would only need the RTS line if you have a separate transceiver like this one with a DE/~RE input.
If you were on Linux and had some specific hardware you could try to use the RS485 mode to toggle the RTS line for you automatically. But you don't seem to be on Linux and the supported hardware is certainly very limited.
You can also toggle the line manually with port.setRequestToSend(true), see here. But note that depending on the timing needs of the device you are talking to, this software solution might not be very reliable. This particular problem has been discussed at length here. Take a look at the links in my answer too; I made some benchmarks with libmodbus that show good results.
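For reference, a rough sketch of what manual toggling could look like (the function name, frame contents and 100 ms timeouts are just placeholders, and the port has to be opened with QSerialPort::NoFlowControl for setRequestToSend to have any effect):

#include <QSerialPort>

bool sendFrameWithManualRts(QSerialPort &port, const QByteArray &frame)
{
    port.setRequestToSend(true);          // assert RTS: put the transceiver in transmit mode
    port.write(frame);
    if (!port.waitForBytesWritten(100))   // driver reports the frame written (timing caveat above)
        return false;
    port.setRequestToSend(false);         // release RTS: back to receive mode
    return port.waitForReadyRead(100);    // wait for the slave's reply to start arriving
}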
Enabling or disabling flow control on the driver won't have any effect on this issue because this is not actually a flow control problem but a direction control one. Modbus runs on two-wire half-duplex links very often, and that means you need a way to indicate which device is allowed to talk on the bus at all times. The RTS (flow control) from an RS232 port can be used for this purpose as a software workaround.
In the end, it would be much less of a headache if you just replace your transceiver with one that supports hardware direction control. If you have a serial port with an FTDI engine you should be able to use the TXEN line for this purpose. Sometimes this hardware line is not directly routed and available on a pin but you can reroute it with MProg.
I would like to highlight that you did not mention if you are running your Modbus on RS485. I guess it's fair to assume you are, but if you have only a couple of devices next to each other you might use RS232 (even on TTL levels) and forget about direction control (you would be running full-duplex with three wires: TX, RX and GND).

How to "publish" a large number of actors in CAF?

I've just learned about CAF, the C++ Actor Framework.
The one thing that surprised me is that the way to make an actor available over the network is to "publish" it to a specific TCP port.
This basically means that the number of actors that you can publish is limited by the number of ports you have (64k). Since you need both one port to publish an actor and one port to access a remote actor, I assume that two processes would each be able to share at best about 32k actors, while they could probably each hold a million actors on a commodity server. This would be even worse if the cluster had, say, 10 nodes.
To make the publishing scalable, each process should only need to open 1 port, for each and every actor in one system, and open 1 connection to each actor system that they want to access.
Is there a way to publish one actor as a proxy for all actors in an actor system (preferably without any significant performance loss)?
Let me add some background. The middleman::publish/middleman::remote_actor function pair does two things: connecting two CAF instances and giving you a handle for communicating to a remote actor. The actor you "publish" to a given port is meant to act as an entry point. This is a convenient rendezvous point, nothing more.
All you need to communicate between two actors is a handle. Of course you need to somehow learn new handles if you want to talk to more actors. The remote_actor function is simply a convenient way to implement a rendezvous between two actors. However, after you learn the handle you can freely pass it around in your distributed system. Actor handles are network transparent.
Also, CAF will always maintain a single TCP connection between two actor systems. If you publish 10 actors on host A and "connect" to all 10 actors from host B via remote_actor, you'll see that CAF will initially open 10 connections (because the target node could run multiple actor systems), but all but one connection will get closed.
If you don't care about the rendezvous for actors offered by publish/remote_actor then you can also use middleman::open and middleman::connect instead. This will only connect two CAF instances without exchanging actor handles. Instead, connect will return a node_id on success. This is all you need for some features. For example remote spawning of actors.
Is there a way to publish one actor as a proxy for all actors in an actor system (preferably without any significant performance loss)?
You can publish one actor at a port whose sole purpose is to model a rendezvous point. If that actor sends 1000 more actor handles to a remote actor, this will not cause any additional network connections.
Writing a custom actor that explicitly models the rendezvous between multiple systems by offering some sort of dictionary is the recommended way.
Just for the sake of completeness: CAF also has a registry mechanism. However, keys are limited to atom values, i.e., 10-characters-or-less. Since the registry is generic it also only stores strong_actor_ptr and leaves type safety to you. However, if that's all you need: you put handles to the registry (see actor_system::registry) and then access this registry remotely via middleman::remote_lookup (you only need a node_id to do this).
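To give a rough idea of the dictionary approach (the state type, the atom usage and the lack of error handling are my own simplifications, and CAF's API details vary between versions):

#include <map>
#include <string>
#include "caf/all.hpp"

struct directory_state {
  std::map<std::string, caf::actor> entries;
};

caf::behavior directory(caf::stateful_actor<directory_state>* self) {
  return {
    [=](caf::put_atom, const std::string& name, const caf::actor& who) {
      self->state.entries[name] = who;             // register a handle under a name
    },
    [=](caf::get_atom, const std::string& name) {  // hand a registered handle back to the caller
      return self->state.entries[name];
    }
  };
}

You would publish only this one actor on a single port via middleman::publish; every other handle is then exchanged through ordinary messages to it, which costs no extra connections.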
Smooth scaling with (almost) no limits is alpha & omega.
One way, used in agent-based systems (not sure if CAF has implemented tools for going this way), is to use multiple transport classes { inproc:// | ipc:// | tcp:// | .. | vmci:// } and thus be able to pick from them on an as-needed basis.
While building a proxy may sound attractive, welding together two different actor models, one "atop" the other, is not as simple to achieve as it sounds (event loops are fragile to get tuned / blocking-prevented / event-handled in a fair manner; they do not like any other master trying to take their own hat...).
In case CAF currently provides no other transport means but TCP,
one may still resort to O/S-level steps and measures and harness the features of the ISO-OSI model up to the limits, or as necessary:
sudo ip address add 172.16.100.17/24 dev eth0
or better, make the additional IP addresses permanent, i.e. edit the file /etc/network/interfaces (on Ubuntu) and add as many stanzas as needed, so that it looks like:
iface eth0 inet static
    address 172.16.100.17/24

iface eth0 inet static
    address 172.16.24.11/24
This way the configuration space could get extended for cases where CAF does not provide any other means for such actors but the said TCP (address:port#) transport class.

Server Multicast - MFC CSocket - C++ - How to?

I'm creating my own server using several protocols: TCP-PULL ok, TCP-PUSH ok, UDP-PULL ok (but I can't serve two clients at the same time!), UDP-PUSH ok (same problem).
Now I need to create the last protocol: Multicast-PUSH, but I can't understand how it works and I really don't know how to code it in C++. I've read about joining a group, and that in multicast there's no connection, so bytes are sent even if no one is connected.
I'm coding in C++, using MFC libraries and CSockets.
Could please someone help?
Thanks!!
Consider an example where one system needs to send the same information to multiple systems. How best to accomplish this? The obvious approach is to have a socket "connection" for each target system. When data is ready to be sent, the sender iterates over each "connection," transmitting the data to the target system. This iteration process has to occur every time a message is sent, and it has to be robust such that if a transmission fails for one system, it doesn't fail for the remaining systems. But the problem is really worse than that, because typically all the systems in a multicast exchange wish to transmit data. This means that each system has to have a "connection" to each and every system wishing to participate.
This is where multicast comes in. In multicast, the sender sends data once to a specialized IP address and port called the multicast group. From there the network equipment, e.g., routers, take care of forwarding the data to the other systems in the multicast group. To achieve this, all systems wishing to participate in the multicast exchange have to "join" the multicast group, which happens during socket initialization and is used to simply notify the network equipment that the system wishes to participate in the multicast exchange. There is a special range of IPv4 addresses used for multicast - 224.0.0.0 to 239.255.255.255. You must use an IP address within this range and a port number of your choosing in order for multicast to work correctly.
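As a rough sketch of the receiving side in plain Winsock (the group address 239.1.2.3 and port 5000 are arbitrary picks; with MFC you can achieve the same through CAsyncSocket::SetSockOpt):

#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

SOCKET joinMulticastGroup()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_port = htons(5000);                        // port the sender transmits to
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    ip_mreq mreq = {};
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");  // the multicast group to join
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);       // let the stack pick the interface
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
               reinterpret_cast<const char*>(&mreq), sizeof(mreq));
    return s;                                            // recvfrom() on this socket now sees group traffic
}

The sender does not join anything; it simply sendto()s datagrams addressed to the group IP and port.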
Check out the Multicast Wrapper Class at CodeProject for an example of how to do this in MFC.

Suggested network topology for this smallish data/command broadcasting application?

We're putting together a system that reads ~32 voltage signals through an analog-to-digital converter card, does some preliminary processing on them, and passes the results (still separated into 32 channels) to the network as UDP packets, where they are picked up by another computer and variously (a) displayed, (b) further processed, (c) searched for criteria to change the state of the acquisition system, or (d) some combination of A-C. Simultaneously a GUI process is running on the computer doing those latter processes (the vis computer), which changes state in both the data-generating computer and the vis computer's multiple processes, through UDP-packeted command messages.
I'm new to network programming and am struggling to pick a network topology. Are there any heuristics (or book chapters, papers) about network topology for relatively small applications that need to pass data, commands, and command acknowledgments flexibly?
System details:
Raw data acquisition happens on a single Linux box. Simply processing the data, saving it to disk, and pushing it to the network uses about 25% of the CPU capacity and a tiny amount of memory. Less than 0.5 Mb/sec of data goes to the network. All code for data generation is in C++.
Another Linux machine runs several visualization / processing / GUI processes. The GUI controls both the acquisition machine and the processes on the vis/processing/GUI computer itself. This code is mostly in C++, with a couple of little utilities in Python.
We will be writing other applications that will want to listen in on the raw data, the processed data, and all the commands being passed around; those applications will want to issue commands as well. We can't anticipate how many such modules we will want to write, but we expect 3 or 4 data-heavy processes that transform all 32 input streams into a single output, as well as 3 or 4 one-off small applications like a "command logger". The modularity requirement means that we want the old data generators and command issuers to be agnostic about how many listeners are out there. We also want commands to be acknowledged by their recipients.
The two machines are connected by a switch, and packets (both data and commands, and acknowledgments) are sent in UDP.
The five possibilities we're thinking of:
1. Data streams, commands, and acknowledgements are targeted by port number. The data generator sends independent data streams as UDP packets to different port numbers bound by independent visualizer processes on the visualization computer. Each process also binds a listening port for incoming commands, and another port for incoming acknowledgments to outgoing commands. This option seems good because the kernel does the work of trafficking/filtering the packets, but bad because it's hard to see how processes address each other in the face of unpredicted added modules; it also seems to lead to an explosion of bound ports.
2. Data streams are targeted to their respective visualizers by port number, and each process binds a port for listening for commands. But all command issuers send their commands to a packet-forwarder process which knows the command-in ports of all processes and forwards each command to all of them. Acknowledgements are also sent to this universal command-in port and forwarded to all processes. We pack information about the intended target of each command and each acknowledgment into the command/ack packets, so the processes themselves have to sift through all the commands/acks to find the ones that pertain to them.
3. The packet-forwarder process is also the target of all data packets. All data packets and all command packets are forwarded to perhaps 40 different processes. This obviously puts a whole lot more traffic on the subnet; it also cleans up the explosion of bound ports.
4. Two packet distributors could run on the vis computer: one broadcasts commands/acks to all ports; the other broadcasts data only to ports that would possibly want data.
5. Our 32 visualization processes could be bundled into 1 process that draws data for the 32 signals, greatly reducing the extra traffic that option 3 causes.
If you've experimented with passing data around among multiple processes on a small number of machines, and have some wisdom or rules of thumb about which strategies are robust, I'd greatly appreciate the advice! (requests for clarification in the pics are welcome)
I don't have enough rep to move this question to programmers.stackexchange.com, so I will answer it here.
First I will throw quite a few technologies at you, each of which you need to take a look at.
Hadoop: A map-reduce framework, able to take a large amount of data and process it across distributed nodes.
Kafka: An extremely high-performance messaging system. I would suggest looking at this as your message bus.
ZooKeeper: A distributed coordination system that would allow you to "figure out" all the different aspects of your distributed system.
Pub/Sub Messaging
ØMQ: Another socket library that allows pub/sub messaging and other N-to-N message-passing arrangements.
Now that I've thrown a few technologies at you I'll explain what I would do.
Create a system that allows you to create N connectors. These connectors can handle Data/Command N in your diagram, where N is a specific signal. That means if you had 32 signals, you could set up your system with 32 connectors to "connect". These connectors can handle two-way communication, hence your receive/command problem. A single connector will publish its data to something such as Kafka on a topic specific to that signal.
Use a publish/subscribe system. Essentially what happens is that the connectors publish their results to a specified topic. This topic is something you choose. Then processors (UI, business logic, etc.) listen on a specific topic. These are all arbitrary and you can set them up however you want.
============    =============    =====    ==============    =============
= Signal 1 =<-->= Connector =<---= K =--->= "signal 1" =--->= Processor =
============    =============    = a =    ==============    =============
                                 = f =
============    =============    = k =    ==============    =============
= Signal 2 =<-->= Connector =<---= a =--->= "signal 2" =--->= Processor =
============    =============    =   =    ==============  | =============
                                 =   =                    |
============    =============    =   =    ==============  |
= Signal 3 =<-->= Connector =<---=   =--->= "signal 3" =--+
============    =============    =====    ==============
In this example the first connector "publishes" its results to topic "signal 1", and the first processor is listening on that topic. Any data sent to that topic is sent to the first processor. The second processor is listening for both "signal 2" and "signal 3" data. This represents something like a user interface retrieving different signals at the same time.
One thing to keep in mind is that this can happen across whatever topics you choose. A "processor" can listen to all topics if you deem it important.
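If you went the ØMQ route instead of Kafka, the topic-per-signal layout above could look roughly like this (the endpoint tcp://*:5556 and the "<topic> <payload>" message format are just illustrative choices):

#include <zmq.h>
#include <cstring>

// Connector side: bind a PUB socket once, then publish each sample with its topic prefix.
void run_connector(void* ctx)
{
    void* pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");
    const char sample[] = "signal 2 0.42";              // "<topic> <payload>"
    zmq_send(pub, sample, std::strlen(sample), 0);
    zmq_close(pub);
}

// Processor side: subscribe only to the topics this process cares about.
void run_processor(void* ctx)
{
    void* sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_connect(sub, "tcp://localhost:5556");
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "signal 2", 8);  // prefix match on the topic
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "signal 3", 8);

    char buf[256];
    int n = zmq_recv(sub, buf, sizeof buf - 1, 0);      // blocks until a matching message arrives
    if (n >= 0)
        buf[n] = '\0';                                  // buf now holds "<topic> <payload>"
    zmq_close(sub);
}

In a real deployment the connector and each processor would be separate processes (or machines), and adding a new listener is just another SUB socket; nothing on the publishing side changes.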

Controlling individual pins on a serial port

I know that serial ports work by sending a single stream of bits in serial. I can write programs to send and receive data through that one pin.
However, there are a lot of other pins on the serial port connector that normally aren't used, but from the documentation they all seem to have some sort of function for signalling as opposed to data transfer.
Is it possible in any way to cause the other pins that are not used for direct data transfer to be controlled individually? If so, how would I go about doing that?
EDIT: more information
I am working with a modern CPU running Windows 7 64-bit on an Intel Core i7-870 processor. I'm using USB-to-serial adapters because it's impossible for me to do anything directly with a USB port, my computer does not come with serial ports, and for some inexplicable reason I have a bunch of these USB-to-serial adapters lying around.
My goal is to control multiple stepper motors (200 steps per rotation, 4-phase motors). My simple circuitry accepts single high pulses and interprets each one as a command to rotate the motor one step. The circuit itself will handle the power supply and phase switching. I wish to use the data transfer pin to send the rotation signals (we can control position and velocity by altering the number and frequency of high pulses through the pin; however, there is no real pulse-width modulation).
I have many motors to control but they do not need to be controlled simultaneously. I hope to use the rest of the pins and run them through a simple combination logic circuit to identify which motor is being moved and which direction it is to move in. This is part of the power switching circuitry.
The data transfer pin will operate normally at some low-end frequency. However, I want to control the other pins so that I can give a solid on or off signal (they won't be flipping very quickly; they only change when I switch to controlling another motor).
Based on the suggestion of Hans Passant, I'd like to suggest that you use an Arduino instead of a USB-to-serial converter. The "Duemilanove" is an Arduino-based board that provides 6 PWM outputs (as well as 8 other digital I/Os and 6 analog inputs). Some more specialized boards might be even cheaper (Arduino Pro Mini, $15 in volume, some soldering required).
Using the handshaking pins to send data can work very well, though probably not on a multitasking OS; it's just very processor intensive (because the port needs to be polled constantly) and requires some custom cables. In fact, back in the day, this is exactly how Laplink got such high transfer rates over serial connections (and why to get those rates you needed a special 'Laplink' cable). And you need both sides of the link to be aware of what's going on and be able to deal with the custom communications. Laplink would send a packet of data over the normal UART pins while trying to send data from the other end of the packet over the handshaking pins. If the correct cable wasn't used (or there was some other problem with sending over the handshaking pins), there was no problem - all the data would just get sent normally.
Embedded developers might know this as 'bit banging' - often on small embedded systems there's no dedicated UART circuitry - to get serial communications to work they have to toggle a general I/O pin with the correct timing. The same can be done on a UART's handshaking pins. But like I said, it can be detrimental to the system if other work needs to be done.
You can use DTR and RTS only, but that still gives four possible states. You do need to be careful that the device on the other end uses TTL levels. At the end of this link (Serial) there are tips on hardware if you need it.
What kind of data rate are you thinking of when you say high frequency? What kind of serial port do you have? With the old 9-pin connectors on the back of the computer, the best you can do is around 115 kbps. With a USB adapter I have done tests where I could push close to 1 Mbps through the port.
Here's an article from Microsoft that goes into great detail on how to work with serial ports:
http://msdn.microsoft.com/en-us/library/ms810467.aspx
It mentions EscapeCommFunction for directly controlling the DTR line.
Before you check out this information, I'm joining in with the others that say a serial port is inappropriate for your application.
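For completeness, a small untested sketch of what driving those lines via EscapeCommFunction could look like ("COM3" and the 100 ms delay are placeholders):

#include <windows.h>

int main()
{
    HANDLE h = CreateFileA("COM3", GENERIC_READ | GENERIC_WRITE,
                           0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    EscapeCommFunction(h, SETDTR);   // drive the DTR pin high
    Sleep(100);
    EscapeCommFunction(h, CLRDTR);   // drive the DTR pin low

    EscapeCommFunction(h, SETRTS);   // the RTS pin works the same way
    EscapeCommFunction(h, CLRRTS);

    CloseHandle(h);
    return 0;
}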
I've been trying to find an answer to your question for 3 hours; it seems like there is no "simple way" to get a simple boolean signal out of a computer...
But there is always a way, and as simple (maybe even stupid) as this may sound, have you considered using the audio jack as an output? It is stereo, so you would have 2 outputs available; the programming is not that difficult, and you don't need to buy expensive hardware to make it work.
If you also need an input, just disassemble a mouse and bridge the sensors to the servos - probably the cheapest and easiest way of doing it...
Another way would be to use the LEDs for Num Lock, Caps Lock and Scroll Lock on the keyboard; these can be activated in software, and you just need to take a cheap external keyboard and use the connectors for these 3 LEDs.
You are describing maybe a parallel port - where you can set bit patterns all at once - then toggle the xmit line to send it all...
Let's take a look from the "bottom up" point of view:
The serial port pins
Pins on the serial port may be connected to a "controller" or directly connected to the processor. In order for the processor to have access to (control of) the pins, there must be an electrical connection from the pins to the processor. If there isn't, neither the processor nor the program can control the pins.
Using a serial controller
A controller, such as a USART, would be connected between the serial port and the processor. The controller may function to convert 8 parallel data bits into a serial bitstream. In the big picture, the controller must provide access to the port pins in order for them to be controlled; if it doesn't, the pins can't be accessed. And if a controller is present, it must be connected to the processor in order to control the pins.
The Processor and the Serial port
Assuming that the pins you want to control are connected to the processor, the processor must be able to access them. Sometimes they are mapped as physical addresses (such as with an ARM processor), or they may be connected to an I/O port (such as on the Intel 8086). A program would access the pins via a pointer or using an I/O instruction. In some processors, the I/O ports must be enabled and initialized before they can be used.
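As a purely hypothetical illustration of the "via a pointer" case on a bare-metal target (the register address 0x40020014 is made up; a real chip's reference manual dictates it):

#include <cstdint>

// Pointer to a memory-mapped GPIO output data register (address is an assumption).
volatile uint32_t* const GPIO_ODR = reinterpret_cast<volatile uint32_t*>(0x40020014);

void setPinHigh(unsigned pin)
{
    *GPIO_ODR |= (1u << pin);   // write the bit for this pin directly into the register
}

void setPinLow(unsigned pin)
{
    *GPIO_ODR &= ~(1u << pin);
}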
Support from the OS
Here's a big ticket item: If your platform has an Operating System, the Operating System must provide services to access the pins of the serial port. The services could be a driver or an API function call. If the OS doesn't provide services, you can't access the serial port pins.
Permission from the OS
Assuming the OS has support for the serial port, your program must now have permission to access the port. In some operating systems, permission may only be granted to root or drivers and not users. If your account does not have permission to access the pins, you are not going to read them.
Support from the Programming Language
Lastly, the programming language must have support for the port. If the language doesn't provide support for the port you may have to change languages, or even program in assembly.
Accessing the "unused" pins of a serial port requires extensive research into the platform. Not all platforms have serial ports. Serial port access is platform dependent and may change across different platforms.
Ask another, more detailed question and you will get more detailed answers. Please provide the kind of platform and OS that you are using.