When does the CCH channel encounter congestion? - Veins

I am using OMNeT++ 5.4.1, Veins 4.7.1, and SUMO 0.30.0. I need to handle channel congestion.
How can I tell when the CCH channel's busy ratio exceeds 65% (i.e. 0.65)?
Or, how can I tell when I should start congestion control?
I really appreciate any help.

In Veins 5.0, the 802.11p MAC layer class (Mac1609_4) emits a signal Mac1609_4::sigChannelBusy (see Mac1609_4.h line 72) whenever it senses the channel turning from idle to busy or vice versa (appending a bool parameter to the signal to indicate which is which - see Mac1609_4.cc line 880). Your application can rely on this signal to determine whether the channel was busy for more than, say, 65% of a given sampling interval.
Similar functionality exists in Veins 4.7.1 (see its Mac1609_4.cc line 952).
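For illustration, here is a minimal sketch of how an application module could subscribe to that signal and derive a channel busy ratio per sampling interval. The class names (MyApplLayer, DemoBaseApplLayer), the nic.mac1609_4 submodule path and the 0.65 threshold are assumptions based on the usual Veins node structure, not code taken from Veins itself; in Veins 4.7.1 the application base class is BaseWaveApplLayer, and the module must implement cIListener (e.g. by also inheriting from cListener) to receive signals.

#include "veins/modules/mac/ieee80211p/Mac1609_4.h"
// (Veins 5 keeps its classes in the veins namespace; add "using namespace veins;"
// there, or drop the qualifier entirely for 4.7.1.)

void MyApplLayer::initialize(int stage)
{
    DemoBaseApplLayer::initialize(stage);
    if (stage == 0) {
        busyTime = idleTime = SIMTIME_ZERO;   // member simtime_t variables
        channelBusy = false;                  // member bool
        lastChange = simTime();               // member simtime_t
        // subscribe directly on the MAC module that emits the signal
        cModule* mac = getParentModule()->getSubmodule("nic")->getSubmodule("mac1609_4");
        mac->subscribe(Mac1609_4::sigChannelBusy, this);
    }
}

void MyApplLayer::receiveSignal(cComponent* source, simsignal_t signalID, bool busy, cObject* details)
{
    if (signalID != Mac1609_4::sigChannelBusy) return;
    // account for the time the channel spent in its previous state
    (channelBusy ? busyTime : idleTime) += simTime() - lastChange;
    channelBusy = busy;
    lastChange = simTime();
}

// Call this periodically (e.g. from a self-message every 100 ms), then reset the
// counters; start congestion control once the ratio exceeds, say, 0.65.
double MyApplLayer::channelBusyRatio()
{
    // include the time spent in the current state up to "now"
    (channelBusy ? busyTime : idleTime) += simTime() - lastChange;
    lastChange = simTime();
    simtime_t total = busyTime + idleTime;
    return total > SIMTIME_ZERO ? busyTime.dbl() / total.dbl() : 0.0;
}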

Related

Getting notified when an I2C value changes in C++

I recently started experimenting with I2C hardware on my Raspberry Pi. Following this tutorial, Using the I2C interface, I already know how to read and set values. However, the program I want to write needs the current value at a specific address all the time. So I made a thread that queries the value constantly in a never-ending loop, which seems primitive to me. Is it possible to get notified in an event-like manner when a value at an I2C address changes?
A platform-independent solution would also be very welcome.
I was able to get what I wanted.
I use the following repeater for the I2C bus: link. It turns out there is a solder bridge (LB2) you can set so that a signal is raised on GPIO17 whenever a value on the I2C bus has changed since it was last read. I can now listen for these events accordingly.
Generally speaking, the I2C bus has no interrupt capability. So with only I2C, all you can do is poll the chip for a certain event to happen or value to change.
Most chips do have an interrupt line (sometimes even more than one) that can be programmed to trigger on certain events. The behavior of this line depends on the chip. Usually it needs to be enabled (using I2C commands) and it needs to be connected to a GPIO input line, and for GPIO inputs interrupt support is available.
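As a rough illustration of the second option, here is a minimal sketch that waits for such an interrupt line on Linux using the legacy sysfs GPIO interface. The GPIO number (17), the "rising" edge and the paths are assumptions for a Raspberry Pi-like setup, and newer kernels would use libgpiod instead, but the idea is the same: block in poll() until the chip asserts its line, then go and read the I2C register.

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // The GPIO must already be exported and configured, e.g.:
    //   echo 17     > /sys/class/gpio/export
    //   echo in     > /sys/class/gpio/gpio17/direction
    //   echo rising > /sys/class/gpio/gpio17/edge
    int fd = open("/sys/class/gpio/gpio17/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[8];
    read(fd, buf, sizeof(buf));        // dummy read to consume the initial state

    for (;;) {
        struct pollfd pfd;
        pfd.fd = fd;
        pfd.events = POLLPRI | POLLERR;
        pfd.revents = 0;
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI)) {
            lseek(fd, 0, SEEK_SET);    // rewind before re-reading the value
            read(fd, buf, sizeof(buf));
            printf("interrupt line asserted - read the I2C register now\n");
        }
    }
}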

NTPD synchronization with 1PPS signal

I have an AHRS (attitude heading reference system) that interfaces with my C++ application. I receive a 50Hz stream of messages via Ethernet from the AHRS, and as part of this message, I get UTC time. My system will also have NTPD running as the time server for our embedded network. The AHRS also has a 1PPS output that indicates the second roll-over time for UTC. I would like to synchronize the NTPD time with the UTC. After some research, I have found that there are techniques that utilize a serial port as input for the 1PPS. From what I can find, these techniques use GPSD to read the 1PPS and communicate with NTPD to synchronize the system time. However, GPSD is expecting a NMEA formatted message from a GPS. I don't have that.
The way I see it now, I have a couple of possible approaches:
1 - Don't use GPSD. Write a program that reads the 1PPS and the Ethernet message containing UTC, and then somehow communicates this information to NTPD.
2 - Use GPSD. Write a program that repackages the Ethernet message into something that can be sent to GPSD, and let it handle the interaction with NTPD.
3 - Something else?
Any suggestions would be very much appreciated.
EDIT:
I apologize for this poorly constructed question.
My solution to this problem is as follows:
1 - Interface the 1PPS signal to an RS232 port, which, as it turns out, is a standard approach that GPSD handles.
2 - Write a custom C++ application that reads the Ethernet messages containing UTC and builds an NMEA sentence from that UTC time (a sketch of the sentence construction follows below).
3 - Feed the NMEA sentence to GPSD, which in turn interfaces with NTPD to synchronize the system time with the GPS/1PPS information.
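For illustration, here is a minimal sketch of the sentence construction in step 2. The position, speed and course fields are placeholder values (assumptions); only the time and date fields matter for this time-keeping use case, and the trailing checksum is the standard XOR of all characters between '$' and '*'.

#include <cstdio>
#include <ctime>
#include <string>

std::string buildRmcSentence(const std::tm& utc)
{
    char body[128];
    // GPRMC: time, status, lat, N/S, lon, E/W, speed, course, date, magvar, magvar E/W, mode
    std::snprintf(body, sizeof(body),
                  "GPRMC,%02d%02d%02d.00,A,0000.0000,N,00000.0000,E,0.0,0.0,%02d%02d%02d,,,A",
                  utc.tm_hour, utc.tm_min, utc.tm_sec,
                  utc.tm_mday, utc.tm_mon + 1, (utc.tm_year + 1900) % 100);

    // NMEA checksum: XOR of every character between '$' and '*'
    unsigned char checksum = 0;
    for (const char* p = body; *p; ++p) checksum ^= (unsigned char)*p;

    char sentence[160];
    std::snprintf(sentence, sizeof(sentence), "$%s*%02X\r\n", body, checksum);
    return sentence;
}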
I don't know why you would want to drive a PPS device with a signal that is delivered via Ethernet frames. Moreover, PPS does not work the way you seem to think it does. There is no timecode in a PPS signal, so you can't sync the time to the PPS signal alone. The PPS signal is simply used to tell the computer exactly how long a second is.
There are examples that show how a PPS signal can be read in via a serial port, e.g. by attaching it to an interrupt-capable pin - that might be Ring Indicator (RI) or something else with comparable features. The problem I see there is that any sort of code-driven servicing of an interrupt has its latencies and jitter. This is determined by your system design (and, if you write it yourself, by your own system-tailored interrupt handler routine - on a PC, even the good old ISA-bus NMI handlers could show such effects).
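For reference, on Linux such a serial-port (or GPIO) PPS source is usually exposed through the RFC 2783 PPS API once the pps line discipline is attached (e.g. with ldattach PPS /dev/ttyS0). A minimal sketch, assuming the device shows up as /dev/pps0 and the pps-tools header is installed:

#include <sys/timepps.h>
#include <fcntl.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/pps0", O_RDONLY);
    if (fd < 0) { perror("open /dev/pps0"); return 1; }

    pps_handle_t handle;
    if (time_pps_create(fd, &handle) < 0) { perror("time_pps_create"); return 1; }

    pps_params_t params;
    time_pps_getparams(handle, &params);
    params.mode |= PPS_CAPTUREASSERT;          // timestamp the asserting edge
    time_pps_setparams(handle, &params);

    for (;;) {
        pps_info_t info;
        struct timespec timeout = { 3, 0 };    // give up after 3 s without a pulse
        if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) < 0) {
            perror("time_pps_fetch");
            break;
        }
        printf("PPS assert at %ld.%09ld\n",
               (long)info.assert_timestamp.tv_sec,
               info.assert_timestamp.tv_nsec);
    }
    time_pps_destroy(handle);
    return 0;
}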
To my best understanding, people doing precise time sync on a "computer" use a true hardware timer/counter (with e.g. 64 bits) and a latch that is triggered to sample and hold the timer value on every incoming 1PPS pulse. Folks already do this with PTP over Ethernet, with the small variation that a particular edge of the incoming data is used as the trigger; sender and receiver can then be synchronized by further program logic that reads the resulting value from the built-in PTP hardware latch.
see here: https://en.wikipedia.org/wiki/Precision_Time_Protocol
along with e.g. 802.1AS: http://www.ieee802.org/1/pages/802.1as.html
described by Wikipedia in the section "Related initiatives" as:
"IEEE 802.1AS-2011 is part of the IEEE Audio Video Bridging (AVB) group of standards, further extended by the IEEE 802.1 Time-Sensitive Networking (TSN) Task Group. It specifies a profile for use of IEEE 1588-2008 for time synchronization over a virtual bridged local area network (as defined by IEEE 802.1Q). In particular, 802.1AS defines how IEEE 802.3 (Ethernet), IEEE 802.11 (Wi-Fi), and MoCA can all be parts of the same PTP timing domain."
some article (in German): https://www.elektronikpraxis.vogel.de/ethernet-fuer-multimediadienste-im-automobil-a-157124/index4.html
and some presentation: http://www.ieee802.org/1/files/public/docs2008/as-kbstanton-8021AS-overview-for-dot11aa-1108.pdf
My answer to your question is:
Yes, it is possible, but it is a precision-limited design due to internal factors such as the latency and jitter of the interrupt handler you are forced to use. The achievable overall precision, per pulse and over the long run, is hard to state, but it might range from some 10 ms at startup with a single pulse down to maybe (guessed) 0.1 ms. Doing it means proving it: long-term observations should help you uncover the true practical limits with your specific computer and chosen software environment.

Is it possible to get more precision in wireless signal strength?

So in Windows/Linux, you are able to get a wireless hotspot's signal strength in dBm. For example, right now I have a signal strength of -64 dBm on my connection.
Now the problem I have is that the signal strength values I receive are always integers. For example, when I query the device for signal strength, it never gives me a value like -64.5 dBm.
My question is: what would be a good approach to get this extra precision? (The OS does not matter.) Should I be programming/modifying the drivers of my wireless transceiver? I don't really think that this is a limitation of the transceiver itself, but something somewhere is rounding the number.
Thank you very much!
After extensive research, I realized that the problem is not only the driver. The device itself has limited capability for reporting signal strength; most regular Wi-Fi adapters do.

How to get 3G modem signal strength in C++ on Linux?

Without using AT commands, how can we get the signal strength of a 3G modem? The gdbus object for NetworkManager doesn't have any method like getSignalStrength.
NetworkManager locks the device file, preventing the use of AT commands.
nm-applet is able to display the signal strength in the system tray, so there should be a way to get the signal strength from NetworkManager!
nmcli is the command-line counterpart of nm-applet. Can I get the signal strength using nmcli? Nothing about signal strength is mentioned in its man pages.
Finally got the answer!
In C++, use libnm-glib to act on the D-Bus proxy. From the command line, use:
gdbus call --system --dest org.freedesktop.ModemManager --object-path /org/freedesktop/ModemManager/Modems/0 --method org.freedesktop.ModemManager.Modem.Gsm.Network.GetSignalQuality
which gives you the signal quality of the GSM modem.
If MM says it cannot get signal quality while connected, it's because
there is only one AT port for all command and data. So when the AT port
is connected, no AT commands can be sent to gather signal quality.
You'll need to either get a better modem with more AT ports, or switch
to a non-AT modem, like a QMI or MBIM powered one. -- Aleksander Morgado
One can listen to org.freedesktop.ModemManager.Modem.Gsm.Network.GetSignalQuality signal using
gdbus monitor --system --dest org.freedesktop.NetworkManager --object-path /org/freedesktop/NetworkManager/Modems/0
Q. Does the proxy signal asynchronously every time there is a change in signal strength, or does ModemManager poll the modem periodically to get the signal quality?
A. That depends on the modem being used; if the modem supports unsolicited
quality change indications, we'll use them; otherwise MM will poll every
30s for signal quality values. The property values in the interface will
be updated once we get the new values (more or less). -- Aleksander Morgado
NetworkManager uses ModemManager for mobile broadband modem control. Instead of looking at the NetworkManager DBus APIs, you can look at the ModemManager ones, which will actually expose the connection/registration details, including signal quality.
If you are aiming to develop an application in C++ to gather information from the modem, I'd suggest using libmm-glib (a GLib-based library) to access the ModemManager DBus API transparently (i.e. without needing to know DBus).
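A minimal sketch along those lines, using libmm-glib against ModemManager 1.x (error handling trimmed; iterating over all exported modems rather than assuming a fixed object path):

#include <libmm-glib.h>

int main()
{
    GError* error = NULL;
    GDBusConnection* bus = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
    MMManager* manager = mm_manager_new_sync(bus,
                                             G_DBUS_OBJECT_MANAGER_CLIENT_FLAGS_NONE,
                                             NULL, &error);
    if (!manager) { g_printerr("%s\n", error->message); return 1; }

    // Walk the exported modem objects and print their signal quality (0-100 %).
    GList* modems = g_dbus_object_manager_get_objects(G_DBUS_OBJECT_MANAGER(manager));
    for (GList* l = modems; l != NULL; l = l->next) {
        MMObject* obj = MM_OBJECT(l->data);
        MMModem* modem = mm_object_get_modem(obj);
        gboolean recent = FALSE;
        guint quality = mm_modem_get_signal_quality(modem, &recent);
        g_print("%s: signal quality %u%% (%s)\n",
                mm_object_get_path(obj), quality,
                recent ? "recent" : "cached");
        g_object_unref(modem);
    }
    g_list_free_full(modems, g_object_unref);
    g_object_unref(manager);
    g_object_unref(bus);
    return 0;
}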
I was working on a homework assignment in which I had to determine indoor location by means of the signal strengths of access points in a building. I was using the
iwlist wlan0 scanning
command to get the signal strength of the nearby access points. Then I processed its output in Bash and piped it to a C++ executable, which is easy to do in Bash. I hope it will help you.
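The same idea can also be done directly in C++ with popen(), which avoids the intermediate Bash step. A minimal sketch (the exact "Signal level=..." output format varies between drivers, so the parsing here is an assumption):

#include <cstdio>
#include <cstring>

int main()
{
    FILE* pipe = popen("iwlist wlan0 scanning", "r");
    if (!pipe) { perror("popen"); return 1; }

    char line[512];
    while (fgets(line, sizeof(line), pipe)) {
        const char* p = std::strstr(line, "Signal level=");
        int dbm;
        if (p && std::sscanf(p, "Signal level=%d dBm", &dbm) == 1)
            std::printf("AP signal level: %d dBm\n", dbm);
    }
    pclose(pipe);
    return 0;
}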

Qt TCP server/multi-client message reading

I am working on a client/server application (using Qt for TCP).
The clients have to send about 15 messages per second to the server.
The problem is this:
the messages from the clients are received in groups. What I mean:
when I get the readyRead() signal and I read the data from the socket, there are multiple messages in the buffer.
This, of course, causes lag in the system.
I tried putting the incoming connections in separate threads (one thread per connection), but there was no improvement.
I also tried spawning a thread each time I got a readyRead() signal, but again nothing...
BUT when I run a number of clients on the same PC as the server, everything seems OK. When using different PCs over the network, the lag occurs...
(The network used is a 100 Mbps LAN, the messages are <200 KB, and ping between PCs is <5 ms, so I don't believe it's a network issue.)
On the client side, the code to write the data is pretty simple:
tcpSocket->write(message.toUtf8());
tcpSocket->waitForBytesWritten();
tcpSocket->flush();
I also tried it without flush() or waitForBytesWritten(), but the result was the same...
EDIT: Using Qt 4.8.4 and Windows 7 and XP
Does anybody have an idea how to overcome this?
Thank you in advance!
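One thing worth checking, independent of Qt's event delivery: TCP is a byte-stream protocol, so several client writes can legitimately arrive in a single readyRead(), and without explicit message framing the receiver cannot tell where one message ends and the next begins. Below is a minimal sketch of length-prefixed framing on both ends; it assumes the protocol can be changed, and the Connection class, the buffer member and handleMessage() are hypothetical names, not from the original code.

// Client side: prepend a 4-byte big-endian length to every message.
#include <QtEndian>

QByteArray payload = message.toUtf8();
QByteArray frame(4, 0);
qToBigEndian<quint32>((quint32)payload.size(), reinterpret_cast<uchar*>(frame.data()));
frame.append(payload);
tcpSocket->write(frame);
// Optionally disable Nagle's algorithm so small frames are not delayed:
// tcpSocket->setSocketOption(QAbstractSocket::LowDelayOption, 1);

// Server side: accumulate bytes per connection and peel off complete frames.
void Connection::onReadyRead()            // connected to QTcpSocket::readyRead()
{
    buffer.append(socket->readAll());     // buffer is a QByteArray member
    while (buffer.size() >= 4) {
        quint32 len = qFromBigEndian<quint32>(
            reinterpret_cast<const uchar*>(buffer.constData()));
        if (buffer.size() < (int)(4 + len))
            break;                        // wait until the whole frame has arrived
        QByteArray body = buffer.mid(4, len);
        buffer.remove(0, 4 + len);
        handleMessage(QString::fromUtf8(body.constData(), body.size()));  // hypothetical handler
    }
}

With framing in place, grouped arrivals stop mattering: however many frames land in one readyRead(), the loop peels them off one by one.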
The last time I ran into a similar problem was with the stdin/stdout communication of a QProcess in Qt 3.3. It behaved completely differently on Linux and Windows.
Finally we found out that on Linux it used select() to react asynchronously when data arrived (fast, in most cases only one line readable), while on Windows the existence of new data was polled via a QTimer from the Qt main loop (large delay, several messages available). A workaround we tried was to reduce the timer period in the Qt source, but in the end we switched to shared memory based on native OS mechanisms.
Your description sounds like you are using a similar Qt version on a Windows OS.