How does Veins calculate RSSI in a Simple Path Loss Model?

We are working on an application based on Veins framework which needs RSSI value of received signal and the distance between sender and receiver.
We referred to the VeReMi project which also calculates RSSI value and sends it to upper level.
We compared our simulation results (RSSI vs. distance) with the VeReMi dataset and they look quite different. Can you help us understand how RSSI is calculated and whether our result is normal?
In our application, we obtain the distance and RSSI value as follows:
auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
At the lower level, the RSSI is set in the Decider80211p::processSignalEnd(AirFrame* msg) method, as in the VeReMi project:
if (result->isSignalCorrect()) {
    DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
    // go on with processing this AirFrame, send it to the Mac-Layer
    WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
    simtime_t start = frame->getSignal().getReceptionStart();
    simtime_t end = frame->getSignal().getReceptionEnd();
    double rssiValue = calcChannelSenseRSSI(start, end);
    decap->setRSSI(rssiValue);
    phy->sendUp(frame, result);
}
Regarding the simulation configuration, our config.xml differs from VeReMi's: the following lines are not present in our case.
<AnalogueModel type="VehicleObstacleShadowing">
    <parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>
The 11p-specific parameters and NIC settings in the omnetpp.ini are the same.
Also, our simulation is based on a Boston map.
The scatter plot of RSSI vs. distance from our simulation is shown in the following figure.
RSSI vs. distance from our simulation: even at distances beyond 1000 meters, we still receive signals with strong RSSI values.
In comparison, we extracted data from the VeReMi dataset and plotted RSSI vs. distance, shown in the following figure.
RSSI vs. distance from the VeReMi dataset is what we were expecting: RSSI decreases as distance increases.
Can you help us explain whether our result is normal and what may cause the issue we have now? Thanks!

I am not familiar with the VeReMi project, so I do not know what value it is referring to as "the RSSI" when a frame is received. The accompanying arXiv paper mentions no more detail than that "the RSSI of the receiver" is logged on frame receptions.
Cursory inspection of the code for logging the dataset you mentioned shows that, on every reception of a frame, a method is called that sums up the power levels of all transmissions currently present at the receiver.
From this, it appears quite plausible that (a) how far a frame has traveled when it arrives at the receiver bears only little relation to (b) the total amount of power experienced by the receiver at that time.
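For intuition only, here is a conceptual sketch (ours, not actual Veins code) of what summing "all power currently present at the receiver" amounts to:

#include <cmath>
#include <vector>

// Sums the power (in mW) of every transmission overlapping the receiver at
// one instant -- including interfering frames -- and converts to dBm. A value
// computed this way depends on the whole channel, not on any single frame,
// which is why it need not fall off with the distance one frame traveled.
double totalPowerAtReceiver_dBm(const std::vector<double>& powers_mW)
{
    double sum_mW = 0.0;
    for (double p : powers_mW)
        sum_mW += p;
    return 10.0 * std::log10(sum_mW);
}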
If you are interested in the received signal strength (RSS) of every frame received, there is a much simpler path you can follow. Taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm()
The same approach should work for Veins 4.6 (on which, I believe, the VeReMi dataset you are referring to is based) as well.
In simulations that only use SimplePathlossModel, Veins' version of a free space path loss model, this will result in the familiar curve of received power falling off monotonically with distance.
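As a sketch under the same assumption (Veins 5 alpha 1 class names; the application class and handler name here are hypothetical), an application layer could read the per-frame RSS like this:

// Sketch only: assumes Veins 5 alpha 1 class names; adapt to your version.
void MyApplLayer::onWSM(WaveShortMessage* wsm)
{
    // The PHY attaches a PhyToMacControlInfo carrying the DeciderResult80211.
    auto* ctrlInfo = check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo());
    auto* result = check_and_cast<DeciderResult80211*>(ctrlInfo->getDeciderResult());
    double rss_dBm = result->getRecvPower_dBm();  // received power of this frame
    EV << "RSS of this frame: " << rss_dBm << " dBm" << std::endl;
}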

Related

ParaView: Live point cloud visualization plugin

I am writing a ParaView 5.1.2 plugin in C++ to visualize point cloud data produced by a LiDAR sensor. I noticed that Velodyne has an open-source ParaView-based custom application called VeloView to visualize their LiDAR data. I tweaked some of their code to get started, but I am stuck now.
So far I have written a reader that takes a pcap file and renders a point cloud that can be played back frame by frame. I also wrote a ParaView source that listens on a port, captures UDP packets, and then uses the reader to split them into frames and visualize the point cloud.
Now I would like to take live UDP packets and render the point cloud in real time as each frame is completed.
I am having trouble accomplishing this because of the ParaView plugin structure. Currently, my reader displays a frame when the method RequestData is called. My method looks something like this:
int RequestData(vtkInformation* request, vtkInformationVector** inputVector, vtkInformationVector* outputVector)
{
    vtkPolyData* output = vtkPolyData::GetData(outputVector);
    vtkInformation* info = outputVector->GetInformationObject(0);
    int timestep = 0;
    if (info->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()))
    {
        double timeRequest = info->Get(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP());
        timestep = static_cast<int>(floor(timeRequest + 0.5));
    }
    this->Open();
    // GetFrame returns a vtkSmartPointer<vtkPolyData> that is the frame
    output->ShallowCopy(this->GetFrame(timestep));
    this->Close();
    return 1;
}
The RequestData method is called every time the timestep is updated in the ParaView GUI; the frame for that timestep is then copied into the outputVector.
I am not sure how to implement this with live data, because in that circumstance the RequestData method is not called: no timesteps are requested. I saw there is a way to keep RequestData executing by using CONTINUE_EXECUTING(), in this way:
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
But I do not know if that is supposed to be used to visualize live data.
For now I am interested in simply reading live packets and throwing them away as soon as their frame is rendered. Does anyone know how I can achieve this?
In the code of VeloView (which basically is a bundled ParaView + LiDAR plugin), the timesteps of ParaView are changed by the main code, not by the LiDAR plugin.
We advise you to start from the VeloView code, which is much closer to your goal.
If you really want to start from scratch within ParaView, you need to increment the requested timestep yourself.
The newest (unreleased) version of VeloView uses the same mechanism as the ParaView "LiveSource" plugin (available in 5.6+), where the plugin tells ParaView to set a QTimer that automatically increments the available and requested timesteps.
request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1); relates to another mechanism that will run RequestData multiple times, but it won't take care of updating the requested timestep.
Best,
Bastien Jacquet
VeloView project leader
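For illustration, here is a rough sketch (ours, not VeloView code) of the "increment the requested timestep yourself" approach described above: RequestData asks the pipeline to keep executing and bumps a member counter on each pass. The member names (LiveTimestep, Done, GetFrame) are hypothetical, and the exact handling of the time keys may differ between ParaView versions.

int RequestData(vtkInformation* request, vtkInformationVector** inputVector, vtkInformationVector* outputVector)
{
    vtkInformation* info = outputVector->GetInformationObject(0);
    // Ask the pipeline to call RequestData again without a new user request.
    request->Set(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING(), 1);
    // Advance our own notion of the current frame and publish it.
    this->LiveTimestep += 1;
    info->Set(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP(), static_cast<double>(this->LiveTimestep));
    // Render whatever frame the live capture has completed so far.
    vtkPolyData* output = vtkPolyData::GetData(outputVector);
    output->ShallowCopy(this->GetFrame(this->LiveTimestep));
    // Stop re-executing once the live source has been closed.
    if (this->Done)
        request->Remove(vtkStreamingDemandDrivenPipeline::CONTINUE_EXECUTING());
    return 1;
}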

How to send a code to the parallel port in exact sync with a visual stimulus in Psychopy

I am new to Python and PsychoPy; however, I have vast experience in programming and in designing experiments (using Matlab and E-Prime). I am running an RSVP (rapid serial visual presentation) experiment which displays a different visual stimulus every X ms (X is an experimental variable that can range from 100 ms to 1000 ms). As this is a physiological experiment, I need to send triggers over the parallel port exactly on stimulus onset. I test the sync between triggers and visual onset using an oscilloscope and a photosensor. However, when I send my trigger before or after the win.flip(), even with the window's waitBlanking=False parameter, I still get a difference between the onset of the stimuli and the onset of the code.
Attached is my code:
im = []
for pic in picnames:
    im.append(visual.ImageStim(myWin, image=pic, pos=[0, 0], autoLog=True))
myWin.flip()  # to get to the next vertical blank
while tm < len(im) and t < len(codes):
    im[tm].draw()
    parallel.setData(codes[t])  # before
    myWin.flip()
    # parallel.setData(codes[t])  # after
    ttime.append(myClock.getTime())
    core.wait(0.01)
    parallel.setData(0)
    dur = (myClock.getTime() - ttime[t]) * 1000
    while dur < stimDur - frameDurAvg + 1:
        dur = (myClock.getTime() - ttime[t]) * 1000
    t = t + 1
    tm = tm + 1
myWin.flip()
How can I sync my stimulus onset to the trigger? I'm not sure if this is a graphics card issue (I'm using an LCD Acer screen with the onboard Intel graphics card). Many thanks,
Shani
win.flip() waits for the next monitor update. This means that the next line after win.flip() is executed almost exactly when the monitor begins drawing the frame. That's where you want to send your trigger. The line just before win.flip() is potentially almost one frame earlier, e.g. 16.7 ms on a 60 Hz monitor, so your trigger would arrive too early.
There are two almost identical ways to do it. Let's start with the most explicit:
for i in range(10):
    win.flip()
    # On the first flip
    if i == 0:
        parallel.setData(255)
        core.wait(0.01)
        parallel.setData(0)
... so the signal is sent just after the image has been pushed to the monitor.
The slightly more timing-accurate way to do it will save you something like 0.01 ms (plus or minus an order of magnitude). Somewhere early in the script, define:
def sendTrigger(code):
    parallel.setData(code)
    core.wait(0.01)
    parallel.setData(0)
Then do
win.callOnFlip(sendTrigger, code=255)
for i in range(10):
    win.flip()
This will call the function just after the first flip, before PsychoPy does a bit of housecleaning. The function could thus have been called win.callOnNextFlip, since it is only executed on the first following flip.
Again, this difference in timing is so minuscule compared to other factors that this is not really a question of performance but rather of style preference.
There is a hidden timing variable that is usually ignored: the monitor input lag, and I think this is the reason for the delay. Put simply, the monitor needs some time to display the image even after getting the input from the graphics card. This delay has nothing to do with the refresh rate (how many times the screen switches buffers) or with the response time of the monitor.
On my monitor, I find a delay of 23 ms when I send a trigger with callOnFlip(). How I correct for it: floor(23/16.667) = 1, and 23 % 16.667 = 6.333. So I call callOnFlip on the second frame, wait 6.3 ms, and then trigger the port. This works. I haven't tried with waitBlanking=True, which waits for the blanking start from the graphics card, as that gives me some more time to prepare the next buffer. However, I think that even with waitBlanking=True the effect will be there. (More after testing!)
Best,
Suddha
There is at least one routine that you can use to normalize the trigger delay to your screen's refresh rate. I just tested it with a photosensor cell, and I went from a mean delay of 13 milliseconds (sd = 3.5 ms) between the trigger and the stimulus display to a mean delay of 4.8 milliseconds (sd = 3.1 ms).
The procedure is the following:
Compute the mean duration between two displays. Say your screen has a refresh rate of 85.05 Hz (this is my case). This means that there is a mean duration of 1000/85.05 = 11.76 milliseconds between two refreshes.
Just after you call win.flip(), wait for this average delay before you send your trigger: core.wait(0.01176).
This will not ensure that all your delays now equal zero, since you cannot control the synchronization between the win.flip() command and the current state of your screen, but it will center the delay around zero. At least, it did for me.
So the code could be updated as follows:
refr_rate = 85.05
mean_delay_ms = 1000 / refr_rate
mean_delay_sec = mean_delay_ms / 1000  # Psychopy needs timing values in seconds

def send_trigger(port, value):
    core.wait(mean_delay_sec)
    parallel.setData(value)
    core.wait(0.001)
    parallel.setData(0)

[...]
stimulus.draw()
win.flip()
send_trigger(port, value)
[...]

How to read the temperature from sensors on the motherboard?

Trying to get the temperature of the processor.
I have already tried using WQL (the WMI class MSAcpi_ThermalZoneTemperature), but apparently it is not implemented on all platforms yet. On most machines it simply returns one of the existing error messages via the HRESULT return value; the temperature itself is not returned.
The idea is to read this temperature directly via the bus port. I found a library (NTPort) which provides outp and inp functionality, and with it managed to initiate the connection; however, the question becomes which port to connect to in order to read the data. The Super I/O microcontroller is an IT8728F. SpeedFan (an application that is able to read the temperature) says in its logs that it reads the data from port 0x290. However, when connecting to it, the data that comes back does not look like a temperature (it always returns 29).
So the next thing tried was to read from every port, checking whether any data that came through looked like a temperature. However, every port returned data that was either too low or too high to be the real temperature.
int reg;
// Select the bank and set its enable bit via the index/data pair.
Outp(INDEX, BANK_SET);
Outp(DATA, Inp(DATA) | 0x01);
// Scan register indices and dump whatever comes back.
// (Note: a byte-wide index register truncates values above 0xFF.)
for (reg = 0x0; reg < 0x999; reg++)
{
    Outp(INDEX, reg);
    printf("register 0x%X: %d\n", reg, Inp(DATA));
}
Maybe your processor really is at 29 degrees Celsius and stable; that seems legitimate for a processor.
Also: is that 0x00000029 or decimal 29? That makes a big difference.
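For what it's worth, here is a hedged sketch of how the IT8728F's Environment Controller (EC) is typically read. On ITE Super I/O chips the EC exposes an index/data register pair at base+5/base+6 (base 0x290 would match what SpeedFan logs), and the lm-sensors it87 driver reads the three temperature inputs from registers 0x29-0x2B. Outp/Inp are the NTPort calls from the question; treat the register map as an assumption to verify against the datasheet.

#include <stdio.h>
// Requires the NTPort header that declares Outp/Inp.

// Assumed layout (verify against the IT8728F datasheet):
// EC base address 0x290, index register at base+5, data register at base+6,
// TMPIN0..TMPIN2 in registers 0x29..0x2B, one byte each, in degrees Celsius.
#define EC_BASE 0x290
#define EC_ADDR (EC_BASE + 5)
#define EC_DATA (EC_BASE + 6)

static int read_ec(int reg)
{
    Outp(EC_ADDR, reg);   // select the EC register
    return Inp(EC_DATA);  // read its value
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        printf("TMPIN%d: %d C\n", i, read_ec(0x29 + i));
    return 0;
}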

RTP timestamp not linear?

I was trying to reconstruct an audio conversation (an A-B call using G.711 audio) using the RTP timestamps. I filled silence using the difference between two RTP timestamps and the sampling rate. The conversation went out of sync, and then I noticed that the RTP timestamp is not linear. I was not able to derive the exact clock time from the RTP timestamp, which resulted in sync issues. How do I calculate the exact time?
I have the same problem with a stream provided by GStreamer, which doesn't provide monotonic timestamps.
For example: the difference between the stamps should be exactly 1920, but it is between ~120 and ~3500, averaging 1920.
The problem here is that there is no way to find missing samples, because you never know whether a large difference comes from encoder delay or from a missing sample.
If you have only audio to decode, I would try to assign "valid" PTS values to each sample (in my case basetime+1920, basetime+3840, and so on).
The big problem comes when video and audio are combined. Here this trick doesn't work well when samples are missing, and there is no way to find out when that is the case :(
When you want to send RTP, you should pay attention to two things: the timestamp increment and the packet pacing.
The timestamp is incremented according to the amount of data sent. E.g., for PT=10 you may have this pattern:
1160 bytes per packet, a timestamp increment of 1154, and a wait of 26 ms between packets.
Let's see how this calculation happens:
number of packets to be sent in one second: 1/(26 ms) = 38
timestamp increment: clock rate / 38 = 1154
According to RFC 3550 (https://www.ietf.org/rfc/rfc3550.txt):
The sampling instant MUST be derived from a clock that increments
monotonically
It's not a choice, nor an option. By the way, please read the full description of the timestamp field of the RTP packet; there I found this as well:
As an example, for fixed-rate audio
the timestamp clock would likely increment by one for each
sampling period. If an audio application reads blocks covering
160 sampling periods from the input device, the timestamp would be
increased by 160 for each such block, regardless of whether the
block is transmitted in a packet or dropped as silent.
If you want to check linearity, use the RTP and NTP timestamp fields of the RTCP Sender Report (SR): in an SR, the RTP timestamp corresponds to the NTP timestamp.
So take the differences of consecutive RTP timestamps (call them dRTP_1, dRTP_2, ...) and the differences of consecutive NTP timestamps (call them dNTP_1, dNTP_2, ...), then divide each dRTP by the clock rate and check whether you get the corresponding dNTP.
But first, please read the RFC.
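To make the check concrete, here is a small self-contained sketch (the names and the SenderReport struct are ours): convert the RTP timestamp difference between two Sender Reports to seconds via the clock rate and compare it against the NTP difference.

#include <cmath>
#include <cstdint>
#include <cstdio>

// RTP/NTP timestamp pair as carried in one RTCP Sender Report.
struct SenderReport {
    uint32_t rtp_ts;   // RTP timestamp field of the SR
    double   ntp_sec;  // NTP timestamp of the SR, converted to seconds
};

// True if the RTP clock advanced consistently with wall-clock time.
bool rtpClockIsLinear(const SenderReport& a, const SenderReport& b,
                      double clock_rate, double tolerance_sec)
{
    // Unsigned subtraction handles RTP timestamp wrap-around.
    double d_rtp_sec = (uint32_t)(b.rtp_ts - a.rtp_ts) / clock_rate;
    double d_ntp_sec = b.ntp_sec - a.ntp_sec;
    return std::fabs(d_rtp_sec - d_ntp_sec) <= tolerance_sec;
}

int main(void)
{
    // Two SRs one second apart on an 8000 Hz G.711 stream.
    SenderReport sr1 = {  8000, 1.0 };
    SenderReport sr2 = { 16000, 2.0 };
    printf("linear: %s\n", rtpClockIsLinear(sr1, sr2, 8000.0, 0.001) ? "yes" : "no");
    return 0;
}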

In-depth explanation of the side-effect interface in Clojure Overtone generators

I am new to Overtone/SuperCollider. I know how sound is formed physically. However, I don't understand the magic inside Overtone's sound-generating functions.
Let's say I have a basic sound:
(definst sin-wave [freq 440 attack 0.01 sustain 0.4 release 0.1 vol 0.4]
  (* (env-gen (lin-env attack sustain release) 1 1 0 1 FREE)
     (+ (sin-osc freq)
        (sin-osc (* freq 2))
        (sin-osc (* freq 4)))
     vol))
I understand the ASR cycle of the sound envelope, the sine wave, frequency, and volume here; they describe the amplitude of the sound over time. What I don't understand is the time. Since time is absent from the inputs of all the functions here, how do I work things like echo and other cool effects into this?
If I were to write my own sin-osc function, how would I specify the amplitude of my sound at a specific point in time? Say my sin-osc has to reach the peak amplitude of 1.0 at 1/4 of the cycle: what is the interface that I can code against to control that?
Without knowing this, the sound synth generators in Overtone don't make sense to me; they look like strange functions with unknown side effects.
Overtone does not specify the individual samples or shapes over time for each signal; it is really just an interface to the SuperCollider server (which defines a protocol for interaction, of which the SuperCollider language is the canonical client, Overtone being another). For that reason, all Overtone is doing behind the scenes is sending the SuperCollider server instructions for how to construct a synth graph. The SuperCollider server is what actually calculates the samples that get sent to the DAC, based on the definitions of the synths playing at any given time. That is why you are given primitive synth elements like sine oscillators, square waves, and filters: these are invoked on the server to actually calculate the samples.
I got an answer from droidcore on #supercollider (Freenode IRC):
d: time is really like wall-clock time, it's just going by
d: the ugen knows how long each sample takes in terms of milliseconds, so it knows how much to advance its notion of time
d: so in an adsr, when you say you want an attack time of 1.0 seconds, it knows that it needs to take 44100 samples (say) to get there
d: the sampling rate is fixed and is global. it's set when you start the synthesis process
d: yeah well that's like doing a lookup in a sine wave table
d: they'll just repeatedly look up the next value in a table that represents one cycle of the wave, and then just circle around to the beginning when they get to the end
d: you can't really do sample-by-sample logic from the SC side
d: ChucK will do that, though, if you want to experiment with it
d: time is global and it's implicit; it's available to all the oscillators all the time
d: but internally it's not really like it's a closed form, where you say "give me the sample for this time value"
d: you say "time has advanced 5 microseconds. give me the new value"
d: it's more like a stream
d: you don't need to have random access to the oscillator's values, just the next one in time sequence
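To make the table-lookup description above concrete, here is a small self-contained sketch (ours, not SuperCollider code) of a wavetable sine oscillator advanced one sample at a time, in exactly the "time has advanced, give me the new value" style:

#include <cmath>
#include <cstdio>
#include <vector>

// One cycle of a sine wave is precomputed; each call to next() advances the
// read position by however much of the cycle one sample covers at the given
// frequency, wrapping around at the end of the table.
class TableOscillator {
public:
    TableOscillator(double freq, double sampleRate, std::size_t tableSize = 4096)
        : table_(tableSize), phase_(0.0),
          increment_(freq * tableSize / sampleRate)
    {
        const double kTwoPi = 6.283185307179586;
        for (std::size_t i = 0; i < tableSize; ++i)
            table_[i] = std::sin(kTwoPi * i / tableSize);
    }

    // "time has advanced one sample; give me the new value"
    double next()
    {
        double sample = table_[static_cast<std::size_t>(phase_)];
        phase_ += increment_;
        if (phase_ >= static_cast<double>(table_.size()))
            phase_ -= static_cast<double>(table_.size());  // circle back to the start
        return sample;
    }

private:
    std::vector<double> table_;
    double phase_;
    double increment_;
};

int main()
{
    TableOscillator osc(440.0, 44100.0);  // A4 at CD sample rate
    for (int i = 0; i < 5; ++i)
        std::printf("%f\n", osc.next());
    return 0;
}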