Trying to access RSSI information on ESP32 for datalogging - c++

As part of a project I'm comparing the effectiveness of different wireless communication methods for measuring distance. I am using RSSI for all of them (I'm aware it's imprecise; quantifying that imprecision is the point of the project). I'm planning on comparing classic Bluetooth, BLE, Wi-Fi and ESP-NOW.
Currently Wi-Fi and ESP-NOW are working and I'm now working on classic Bluetooth. I'm able to use the built-in examples to find my device and print it to the console. However, how can I access the data stored within BTScanResults?
For example, the pseudocode would be:
if name == "ESP32test":
    print RSSI of name to Serial
    delay(1 s)
It needs to be in this format because the serial output is fed directly into Microsoft Excel for data formatting, and with thousands of data points manual recording is not feasible.
Thanks for any help, Matt

I made a little program to log all Bluetooth devices advertising in the vicinity. To get the name and the RSSI I used:
String(advertisedDevice.getName().c_str()); for the name
and
advertisedDevice.getRSSI(); for the RSSI.
Below is how it looks in the actual code, with only the essentials left in for brevity. BleLog[] is just a struct array that holds a table of results.
It works in general, but for some reason the RSSI is only reported most of the time, not always.
class MyAdvertisedDeviceCallbacks : public BLEAdvertisedDeviceCallbacks
{
    void onResult(BLEAdvertisedDevice advertisedDevice)
    {
        printResult(advertisedDevice);
        parseResult(advertisedDevice);
        .............
    }
};

void parseResult(BLEAdvertisedDevice advertisedDevice)
{
    ......................
    // Fill in the data for the log entry
    BleLog[foundAddress].occurences = oldBleLog[foundAddress].occurences + 1;
    BleLog[foundAddress].lastRssi = advertisedDevice.getRSSI();
    BleLog[foundAddress].lastSeen = millis();
    BleLog[foundAddress].deviceName = String(advertisedDevice.getName().c_str());
    BleLog[foundAddress].addressType = advertisedDevice.getAddressType();
    ...................
}
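For the specific "print the RSSI of ESP32test once per second" case from the question, a minimal sketch built on the same BLE scan callbacks could look like the code below. The device name, baud rate and scan timings are assumptions to adapt; if you stay with classic Bluetooth instead, the BTAdvertisedDevice entries in BTScanResults should expose a similar getName()/getRSSI() pair.

#include <BLEDevice.h>
#include <BLEScan.h>
#include <BLEAdvertisedDevice.h>

static const char* kTargetName = "ESP32test";  // name of the advertising board

// Called once per advertisement seen during a scan.
class RssiLogger : public BLEAdvertisedDeviceCallbacks {
    void onResult(BLEAdvertisedDevice advertisedDevice) override {
        if (String(advertisedDevice.getName().c_str()) == kTargetName) {
            Serial.println(advertisedDevice.getRSSI());  // one value per line for Excel
        }
    }
};

BLEScan* pScan;

void setup() {
    Serial.begin(115200);
    BLEDevice::init("");
    pScan = BLEDevice::getScan();
    pScan->setAdvertisedDeviceCallbacks(new RssiLogger());
    pScan->setActiveScan(true);  // active scanning returns device names more reliably
}

void loop() {
    pScan->start(1, false);   // scan for 1 second; RSSI is printed from the callback
    pScan->clearResults();    // free the results between scans
    delay(1000);
}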


How to track screens through time? [duplicate]

I have a setup with two regular displays and three projectors connected to a Windows PC. In my Win32 program I need to uniquely identify each monitor and store information for each one, such that I can retrieve the stored information even after a computer restart.
EnumDisplayDevices seems to return a different device order after restarting the computer. There is also GetPhysicalMonitorsFromHMONITOR, which at least gives me the display's name. However, I need something like a serial number for my projectors, since they are the same model. How can I get such a unique identifier?
EDIT: This is the solution I came up with after reading the answer from user Anders (thanks!):
DISPLAY_DEVICEA dispDevice;
ZeroMemory(&dispDevice, sizeof(dispDevice));
dispDevice.cb = sizeof(dispDevice);
DWORD screenID = 0;
while (EnumDisplayDevicesA(NULL, screenID, &dispDevice, 0))
{
    // important: make a copy of DeviceName, the second call overwrites dispDevice
    char name[sizeof(dispDevice.DeviceName)];
    strcpy(name, dispDevice.DeviceName);
    if (EnumDisplayDevicesA(name, 0, &dispDevice, EDD_GET_DEVICE_INTERFACE_NAME))
    {
        // at this point dispDevice.DeviceID contains a unique identifier for the monitor
    }
    ++screenID;
}
EnumDisplayDevices with the EDD_GET_DEVICE_INTERFACE_NAME flag should give you a usable string. And if not, you can use this string with the SetupAPI to get the hardware id or driver key or whatever is unique enough for your purpose.
Set this flag to EDD_GET_DEVICE_INTERFACE_NAME (0x00000001) to retrieve the device interface name for GUID_DEVINTERFACE_MONITOR, which is registered by the operating system on a per monitor basis. The value is placed in the DeviceID member of the DISPLAY_DEVICE structure returned in lpDisplayDevice. The resulting device interface name can be used with SetupAPI functions and serves as a link between GDI monitor devices and SetupAPI monitor devices.
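If the interface name alone is not unique enough, a rough sketch of the SetupAPI route mentioned above could look like the following. This is untested; GetMonitorHardwareId is a made-up helper name, and error handling is kept to a minimum.

#include <windows.h>
#include <setupapi.h>
#pragma comment(lib, "setupapi.lib")

// Look up the hardware ID of the monitor behind the interface path that
// EnumDisplayDevicesA placed in dispDevice.DeviceID.
bool GetMonitorHardwareId(const char* interfacePath, char* buf, DWORD bufSize)
{
    HDEVINFO devs = SetupDiCreateDeviceInfoList(NULL, NULL);
    if (devs == INVALID_HANDLE_VALUE)
        return false;

    SP_DEVICE_INTERFACE_DATA ifaceData = {};
    ifaceData.cbSize = sizeof(ifaceData);
    SP_DEVINFO_DATA devInfo = {};
    devInfo.cbSize = sizeof(devInfo);
    DWORD required = 0;
    bool ok = false;

    if (SetupDiOpenDeviceInterfaceA(devs, interfacePath, 0, &ifaceData))
    {
        // Only the SP_DEVINFO_DATA is needed; the call fails with
        // ERROR_INSUFFICIENT_BUFFER because no detail buffer is passed, which is fine here.
        SetupDiGetDeviceInterfaceDetailA(devs, &ifaceData, NULL, 0, &required, &devInfo);
        ok = SetupDiGetDeviceRegistryPropertyA(devs, &devInfo, SPDRP_HARDWAREID,
                                               NULL, (PBYTE)buf, bufSize, NULL) != FALSE;
    }
    SetupDiDestroyDeviceInfoList(devs);
    return ok;  // buf now holds a REG_MULTI_SZ; the first string is usually what you want
}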

Get the binary data transferred from a gRPC client

I am new to the gRPC framework, and I have created a sample client-server setup on my PC (referring to this).
In my client-server application I have implemented a simple RPC:
service NameStudent {
    rpc GetRoll(RollNo) returns (Details) {}
}
The client sends a RollNo and receives the student's details: name, age, gender, parent name, and roll no.
message RollNo {
    int32 roll = 1;
}

message Details {
    string name = 1;
    string gender = 2;
    int32 age = 3;
    string parent = 4;
    RollNo rollid = 5;
}
The actual server and client code is an adaptation of the sample code explained here.
My server listens on "0.0.0.0:50051" (address:port) and the client sends the roll no to "localhost:50051" and receives the details.
I want to see the actual binary data that is transferred between client and server. I have tried using Wireshark, but I don't understand what I am seeing there.
Here is a screenshot of the Wireshark capture.
And here are the details of the highlighted entry from the above screenshot.
I need help understanding Wireshark here, or any other way that can be used to see the binary data.
Wireshark uses the port to determine how to decode the communication, and it doesn't know any protocol associated with 50051. So you need to configure it to treat this as HTTP.
Right click on a row and select "Decode As..." in the context menu.
Then set "Current" to "HTTP" or "HTTP2" (HTTP will generally auto-detect HTTP2) and hit "OK".
Then the HTTP/2 frames should be decoded. And if using a recent version of Wireshark, you may also see the gRPC frames decoded.
The whole idea of gRPC is to hide that. Let's say we ignore that and you know what you're doing.
Look at https://en.wikipedia.org/wiki/Protocol_Buffers. gRPC uses Protocol Buffers for its data representation, so that should give you a hint at the data you're seeing.
Two good starting points for a reverse-engineering exercise are:
Start simple: compile a program that sends an integer. Understand it. Sniff it. Then compile a program that sends a string. Try several values. Once you understand those, move on to tackling how Google sends your structure.
Use known data and make small variations: knowing what 505249... means is easier if you already know the data you're sending (as an example, send the string "Hello world"; then change it to "Hella world"; see what changes in the captured bytes; also check that sending the same data several times produces the same sniffed output). Apply the prior point: start simple, first an empty string, then " ", then "a", then "b", and so on, and only then move on to complex and larger strings. Don't be afraid to start simple.
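To make the "start simple" advice concrete, here is a small sketch that hand-encodes the RollNo message from the question the way Protocol Buffers lays it out on the wire (field number 1, wire type 0, value as a varint). In the capture, these bytes sit inside an HTTP/2 DATA frame after gRPC's 5-byte message prefix (1 compression-flag byte + 4 length bytes). This is only an illustration of the encoding, not how you would normally serialize a message.

#include <cstdint>
#include <cstdio>
#include <vector>

// Hand-rolled varint encoder, only to show what the bytes in a capture mean.
static void encodeVarint(uint64_t value, std::vector<uint8_t>& out)
{
    while (value >= 0x80) {
        out.push_back(static_cast<uint8_t>(value) | 0x80);  // low 7 bits + continuation bit
        value >>= 7;
    }
    out.push_back(static_cast<uint8_t>(value));
}

int main()
{
    // RollNo { roll = 150 }: field number 1, wire type 0 (varint).
    std::vector<uint8_t> buf;
    encodeVarint((1 << 3) | 0, buf);  // tag byte  -> 0x08
    encodeVarint(150, buf);           // value     -> 0x96 0x01
    for (uint8_t b : buf)
        printf("%02x ", b);           // prints: 08 96 01
    printf("\n");
    return 0;
}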

How does veins calculate RSSI in a Simple Path Loss Model?

We are working on an application based on the Veins framework which needs the RSSI value of a received signal and the distance between sender and receiver.
We referred to the VeReMi project, which also calculates an RSSI value and sends it to the upper layer.
We compared our simulation results (RSSI vs distance) with the VeReMi dataset and they look quite different. Can you help us explain how the RSSI is calculated and whether our result is normal?
In our application, we obtain the distance and RSSI value with:
auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
At the lower level, the RSSI is set in the Decider80211p::processSignalEnd(AirFrame* msg) method, as in the VeReMi project.
if (result->isSignalCorrect()) {
    DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
    // go on with processing this AirFrame, send it to the Mac-Layer
    WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
    simtime_t start = frame->getSignal().getReceptionStart();
    simtime_t end = frame->getSignal().getReceptionEnd();
    double rssiValue = calcChannelSenseRSSI(start, end);
    decap->setRSSI(rssiValue);
    phy->sendUp(frame, result);
}
Regarding the simulation configuration, our config.xml differs from VeReMi's: the following lines are not present in our case.
<AnalogueModel type="VehicleObstacleShadowing">
    <parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>
The 802.11p-specific parameters and NIC settings in the omnetpp.ini are the same.
Also, our simulation is based on a Boston map.
The scatter plot of our simulation result (RSSI vs distance) is shown in the following figure.
RSSI vs distance from our simulation shows that even at distances beyond 1000 meters we still receive signals with strong RSSI values.
In comparison, we extracted data from the VeReMi dataset and plotted RSSI vs distance, shown in the following picture.
The VeReMi dataset's RSSI vs distance is what we were expecting: RSSI decreases as distance increases.
Can you help us explain whether our result is normal and what may cause the issue we have now? Thanks!
I am not familiar with the VeReMi project, so I do not know what value it is referring to as "the RSSI" when a frame is received. The accompanying arXiv paper mentions no more detail than that "the RSSI of the receiver" is logged on frame receptions.
Cursory inspection of the code for logging the dataset you mentioned shows that, on every reception of a frame, a method is called that sums up the power levels of all transmissions currently present at the receiver.
From this, it appears quite straightforward that (a) how far a frame traveled when it arrives at the receiver has only little relation to (b) the total amount of power experienced by the receiver at this time.
If you are interested in the Received Signal Strength (RSS) of every frame received, there is a much simpler path you can follow: Taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm(). The same approach should work for Veins 4.6 (which, I believe, the VeReMi dataset you are referring to is based on) as well.
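As a concrete sketch, this is roughly how the call could sit in an application layer's receive handler; MyApplLayer and onWSM are placeholders for whatever your own application class uses, only the cast chain is taken from above.

// Fragment of a hypothetical application layer class.
void MyApplLayer::onWSM(WaveShortMessage* wsm)
{
    DeciderResult80211* result = check_and_cast<DeciderResult80211*>(
        check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult());
    double rssDbm = result->getRecvPower_dBm();  // RSS of this particular frame, in dBm
    EV << "received frame with RSS " << rssDbm << " dBm" << std::endl;
}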
In simulations that only use SimplePathlossModel, Veins' version of a free space path loss model, this will result in the familiar curve of RSS falling off with distance.

How to read the temperature from sensors on the motherboard?

I am trying to get the temperature of the processor.
I have already tried using WQL (the WMI class MSAcpi_ThermalZoneTemperature), but apparently it is not implemented on all platforms yet. On most machines it simply returns one error or another via the HRESULT return value; the temperature itself is not returned.
The idea is to read this temperature directly via port I/O. I found a library (NTPort) that provides outp and inp functions and with it managed to initiate the connection; however, the question becomes which port to read the data from. The Super I/O chip is an IT8728F. SpeedFan (an application that is able to read the temperature) says in its logs that it reads the data from port 0x290. However, when reading from it, the data that comes back does not look like a temperature (it always returns 29).
The next thing I tried was to read from every port, checking whether any data that came through looked like a temperature. However, every port returned data that was either too low or too high to be the real temperature.
int CPU_TEMP;
Outp(INDEX, BANK_SET);           // write BANK_SET to the index port
Outp(DATA, Inp(DATA) | 0x01);    // set bit 0 of the currently selected register
for (CPU_TEMP = 0x0; CPU_TEMP < 0x999; CPU_TEMP++)
{
    Outp(INDEX, CPU_TEMP);                  // select register CPU_TEMP
    printf("CPU temp: %iC\n", Inp(DATA));   // dump whatever the data port returns
}
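For what it's worth, on ITE IT87xx chips the Environment Controller is usually reached through an index/data register pair at base+5 and base+6 rather than at the base address itself, with the temperature readings at register indices 0x29-0x2B. With the base 0x290 that SpeedFan reports, a sketch along those lines would be the following; the offsets and register indices are assumptions taken from the common IT87xx register layout, not verified for this exact board. Outp/Inp are the same NTPort calls used above.

const unsigned short EC_BASE = 0x290;        // base address reported by SpeedFan
const unsigned short EC_ADDR = EC_BASE + 5;  // index register (assumed offset)
const unsigned short EC_DATA = EC_BASE + 6;  // data register (assumed offset)

// Read one Environment Controller register via the index/data pair.
int readEcRegister(unsigned char index)
{
    Outp(EC_ADDR, index);   // select the EC register
    return Inp(EC_DATA);    // read its value
}

void dumpTemperatures()
{
    // Temperature registers are commonly at indices 0x29..0x2B (assumption).
    for (unsigned char reg = 0x29; reg <= 0x2B; ++reg)
        printf("temp register 0x%02X: %d C\n", reg, readEcRegister(reg));
}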
Maybe your processor really is at 29 degrees Celsius and stable; that seems legitimate for a processor.
Also: is that 0x29 or decimal 29? That makes a big difference (0x29 would be 41 in decimal).

C++ - Testing serial ports without a physical device

I have a program that splits a serial device into multiple virtual serial ports and routes all the data to them.
                     ---- /dev/ttyS1.a [data]->
                     |
[data]-> /dev/ttyS1 ---- /dev/ttyS1.b [data]->
                     |
                     ---- /dev/ttyS1.c [data]->
My working program (pseudocode for the sake of readability and simplicity):
poll(...) {
    // Route data from master to vsp
    master.read(buf)
    for (virtual serial ports as vsp) {
        vsp.write(buf)
    }

    // Route data from vsp to master (if need be)
    for (virtual serial ports as vsp) {
        if (vsp.needs_to_write()) {
            vsp.read(buf)
            master.write(buf)
        }
    }
}
I have one physical serial port device on my machine that continuously feeds data through, which is how I tested that my program initially works, but I would like to write a test that emulates/simulates writing and reading in both directions and verifies the data on both ends. Since my physical serial port device sends seemingly random data, it is hard to verify that what goes in is exactly what comes out.
How would I be able to do this? (pseudocode)
1. fork a process that feeds a known char sequence into /dev/ttyS2 in a loop
2. use my program to read from the port with `master.read(buf)` and then write to the vsp with `vsp.write(buf)`
3. how can I verify that, after writing the buf, the vsp has the correct data?
Any help is appreciated; I am confused about how to automate testing this.
Edit 1:
No one can help?
I think you can use com0com to test. You can connect two virtual serial ports together, then write to one and read from the other what it received.
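Since the paths in the question are Linux-style (/dev/ttyS1), a pseudo-terminal pair can also play the role of the physical device without extra tools. Below is a rough sketch under the assumption that the splitter can be pointed at an arbitrary device path; the pattern, chunk count and timing are arbitrary.

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    // Create a pseudo-terminal; the slave side acts as the fake "physical" port.
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }
    printf("point the splitter at: %s\n", ptsname(master));

    // Feed a known, repeating pattern; the test then reads /dev/ttyS1.a/b/c
    // and checks that exactly this sequence comes out of each of them.
    const char pattern[] = "0123456789ABCDEF";
    for (int i = 0; i < 1000; ++i) {
        if (write(master, pattern, strlen(pattern)) < 0) {
            perror("write");
            return 1;
        }
        usleep(10000);  // 10 ms between chunks, roughly emulating a slow device
    }
    close(master);
    return 0;
}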