INET EnergyConsumer module parameters: new values never take effect at runtime - C++

I would appreciate your help with an issue I'm facing: I want to change the energyConsumer module's parameters at runtime.
The scenario is that I need to specify the amount of energy consumed while nodes are transmitting or receiving, depending on the state of the radio.
I managed to do this with the piece of code below. The designated parameters are set correctly, but the problem is that the new values never take effect as intended.
The parameterization of the energyConsumer module is as follows:
// locate the cluster head's energyConsumer submodule
cModule *module = getParentModule()->getParentModule()->getParentModule()->getSubmodule("node", myCHIndex); // i.e., the grandparent's parent
cModule *energyConsumerCHModule = module->getSubmodule("wlan", 0)->getSubmodule("radio")->getSubmodule("energyConsumer");
StateBasedEpEnergyConsumer *energyConsumption = check_and_cast<StateBasedEpEnergyConsumer *>(energyConsumerCHModule);
cModule *chmodule = module->getSubmodule("generic")->getSubmodule("np");
Sim *chsim = check_and_cast<Sim *>(chmodule);
// update the receiving power consumption parameter
cPar *receivingPower = &energyConsumption->par("receiverReceivingPowerConsumption");
EV_DEBUG << "MY CLUSTER HEAD RECEIVING POWER IS : " << receivingPower->doubleValue() << endl;
EV_DEBUG << "NEW VALUE CALCULATED : " << ((chsim->EnergyCH_RX) * bitrate) / K << endl;
receivingPower->setDoubleValue(((chsim->EnergyCH_RX) * bitrate) / K);
EV_DEBUG << "MY CLUSTER HEAD RECEIVING POWER APPLIED IS : " << receivingPower->doubleValue() << endl;
EV_DEBUG << "**************************************************************" << endl;
// update the transmitting power consumption parameter
cPar *transmittingPower = &energyConsumption->par("transmitterTransmittingPowerConsumption");
EV_DEBUG << "MY CURRENT TRANSMITTING POWER IS : " << transmittingPower->doubleValue() << endl;
EV_DEBUG << "NEW VALUE CALCULATED : " << (EnergyToCH * bitrate) / K << endl;
transmittingPower->setDoubleValue((EnergyToCH * bitrate) / K);
EV_DEBUG << "MY CURRENT TRANSMITTING POWER APPLIED IS : " << transmittingPower->doubleValue() << endl;
The output is as shown in the following images:
But as you can see, no energy consumption happens (the energyStorage module still holds the initial energy provided, 0.5 J), despite the parameters having well-defined values.
The next image shows the state of the energy storage module:
Could you please advise on how I should proceed?
Many thanks for your support...

The StateBasedEpEnergyConsumer module is not written to expect that certain parameters change during runtime (in fact, almost none of the INET modules are). A parameter is usually read only once, typically during the initialization phase. After that point the module stores the value of the parameter in an internal member variable and does not care about any future change (you can check this in StateBasedEpEnergyConsumer.cc).
If you want to change the energy consumption values during runtime, you should write your own energy consumer module by implementing the power::IEpEnergyConsumer interface (or modify the current implementation by adding a handleParameterChange() method that stores the new value in the member variable).
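For the second option, a minimal sketch could look as follows. This is an illustration, not a drop-in module: the class name RuntimeTunableEnergyConsumer is made up, and the member names, include path, and signal usage follow INET 4.x, so check them against your INET version.

#include <cstring>
// adjust the include path to your INET version
#include "inet/physicallayer/energyconsumer/StateBasedEpEnergyConsumer.h"

using namespace inet;
using namespace inet::physicallayer;
using namespace inet::units::values;

class RuntimeTunableEnergyConsumer : public StateBasedEpEnergyConsumer
{
  protected:
    virtual void handleParameterChange(const char *parname) override
    {
        // refresh the member variables that cache the parameter values
        if (!parname || !strcmp(parname, "receiverReceivingPowerConsumption"))
            receiverReceivingPowerConsumption = W(par("receiverReceivingPowerConsumption"));
        if (!parname || !strcmp(parname, "transmitterTransmittingPowerConsumption"))
            transmitterTransmittingPowerConsumption = W(par("transmitterTransmittingPowerConsumption"));
        // re-evaluate the consumption for the current radio state and notify listeners
        powerConsumption = computePowerConsumption();
        emit(powerConsumptionChangedSignal, powerConsumption.get());
    }
};

Define_Module(RuntimeTunableEnergyConsumer);

You would also need a matching .ned type for the new module, and reference it in place of the stock energyConsumer.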

Related

Erlang - How is the creation integer (a part of a distributed pid representation) actually created?

In a distributed Erlang system pids can have two different representations: i) internal; ii) external.
The internal representation has the following shape: <A.B.C>.
The external representation, used for instance when a message has to travel across different nodes, is instead composed of the following elements: <node_id, ID, serial, creation>, according to the official documentation, where node_id is the name of the node, ID and serial identify the process on node_id, and creation is an integer used to distinguish the node from past (crashed) versions of itself.
What I could not find is how the creation integer is created by the VM.
By setting up a small experiment on my PC, I have seen that if I create and kill the same node several times, the counter is always increased by 1, and that by creating the same node on different machines, the creation integers are different but have some similarities in their structure, for instance:
machine 1 -> creation integer = 1647595383
machine 2 -> creation integer = 1647596018
Do any of you have any knowledge about how this integer is created? If so, could you please explain it to me and possibly reference some (more or less) official documentation?
The creation is sent as part of the response to node registration in epmd; see the details on that protocol.
If you have a custom erl_epmd module, you can also provide your own way of generating the creation value.
The original creation is the local time at which a node with that name is first registered, and it is then bumped once each time the name is re-registered. (This would also explain why the creation integers in the question resemble Unix timestamps taken a few minutes apart.)

TopologyTestDriver with streaming groupByKey.windowedBy.reduce not working like kafka server [duplicate]

I'm trying to play with Kafka Streams to aggregate some attributes of People.
I have a Kafka Streams test like this:
val factory = new ConsumerRecordFactory[Array[Byte], Character]("input", new ByteArraySerializer(), new CharacterSerializer())
var i = 0
while (i != 5) {
  testDriver.pipeInput(factory.create("input", Character(123, 12), 15 * 10000L))
  i += 1
}
val output = testDriver.readOutput....
I'm trying to group the values by key like this:
streamBuilder.stream[Array[Byte], Character](inputKafkaTopic)
  .filter((key, _) => key == null)
  .mapValues(character => PersonInfos(character.id, character.id2, character.age)) // case class
  .groupBy((_, value) => CharacterInfos(value.id, value.id2)) // case class
  .count().toStream.print(Printed.toSysOut[CharacterInfos, Long])
When I run the code, I get this:
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 1
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 2
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 3
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 4
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 5
Why am I getting 5 rows instead of just one line with CharacterInfos and the count?
Doesn't groupBy just change the key ?
If you use the TopologyTestDriver, caching is effectively disabled and thus every input record will always produce an output record. This is by design, because caching implies non-deterministic behavior, which makes it very hard to write an actual unit test.
If you deploy the code in a real application, the behavior will be different and caching will reduce the output load -- which intermediate results you will get is not defined (i.e., non-deterministic); compare Michael Noll's answer.
For your unit test, it should not really matter: you can either test for all output records (i.e., all intermediate results), or put all output records into a key-value Map and only test for the last emitted record per key (if you don't care about the intermediate results).
Furthermore, you could use suppress() operator to get fine grained control over what output messages you get. suppress()—in contrast to caching—is fully deterministic and thus writing a unit test works well. However, note that suppress() is event-time driven, and thus, if you stop sending new records, time does not advance and suppress() does not emit data. For unit testing, this is important to consider, because you might need to send some additional "dummy" data to trigger the output you actually want to test for. For more details on suppress() check out this blog post: https://www.confluent.io/blog/kafka-streams-take-on-watermarks-and-triggers
Update: I didn't spot the line in the example code that refers to the TopologyTestDriver in Kafka Streams. My answer below is for the 'normal' KStreams application behavior, whereas the TopologyTestDriver behaves differently. See the answer by Matthias J. Sax for the latter.
This is expected behavior. Somewhat simplified, Kafka Streams emits by default a new output record as soon as a new input record was received.
When you are aggregating (here: counting) the input data, then the aggregation result will be updated (and thus a new output record produced) as soon as new input was received for the aggregation.
input record 1 ---> new output record with count=1
input record 2 ---> new output record with count=2
...
input record 5 ---> new output record with count=5
What to do about it: You can reduce the number of 'intermediate' outputs by configuring the size of the so-called record caches as well as the commit.interval.ms parameter. See Memory Management. However, how much reduction you will see depends not only on these settings but also on the characteristics of your input data, and because of that the extent of the reduction may also vary over time (think: could be 90% in the first hour of data, 76% in the second hour of data, etc.). That is, the reduction process is deterministic, but the resulting reduction amount is difficult to predict from the outside.
Note: When doing windowed aggregations (like windowed counts) you can also use the Suppress() API so that the number of intermediate updates is not only reduced, but there will only ever be a single output per window. However, in your use case/code the aggregation is not windowed, so you cannot use the Suppress API.
To help you understand why the setup is this way: You must keep in mind that a streaming system generally operates on unbounded streams of data, which means the system doesn't know 'when it has received all the input data'. So even the term 'intermediate outputs' is actually misleading: at the time the second input record was received, for example, the system believes that the result of the (non-windowed) aggregation is '2' -- it's the correct result to the best of its knowledge at this point in time. It cannot predict whether (or when) another input record might arrive.
For windowed aggregations (where Suppress is supported) this is a bit easier, because the window size defines a boundary for the input data of a given window. Here, the Suppress() API allows you to make a trade-off decision between better latency but with multiple outputs per window (default behavior, Suppress disabled) and longer latency but you'll get only a single output per window (Suppress enabled). In the latter case, if you have 1h windows, you will not see any output for a given window until 1h later, so to speak. For some use cases this is acceptable, for others it is not.

How does veins calculate RSSI in a Simple Path Loss Model?

We are working on an application based on the Veins framework which needs the RSSI value of the received signal and the distance between sender and receiver.
We referred to the VeReMi project, which also calculates an RSSI value and sends it to the upper layer.
We compared our simulation result (RSSI vs. distance) with the VeReMi dataset, and they look quite different. Can you help us explain how RSSI is calculated and whether our result is normal?
In our application, we obtain the distance and RSSI value as follows:
auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
At the lower level, the RSSI is set in the Decider80211p::processSignalEnd(AirFrame* msg) method, as in the VeReMi project.
if (result->isSignalCorrect()) {
    DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
    // go on with processing this AirFrame, send it to the Mac-Layer
    WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
    simtime_t start = frame->getSignal().getReceptionStart();
    simtime_t end = frame->getSignal().getReceptionEnd();
    double rssiValue = calcChannelSenseRSSI(start, end);
    decap->setRSSI(rssiValue);
    phy->sendUp(frame, result);
}
Regarding the simulation configuration, our config.xml differs from VeReMi's; the following lines are not present in our case:
<AnalogueModel type="VehicleObstacleShadowing">
    <parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>
The 11p-specific parameters and NIC settings in omnetpp.ini are the same.
Also, our simulation is based on a map of Boston.
The scatter plot of our simulation result (RSSI vs. distance) is shown in the following figure.
RSSI vs. distance from our simulation: even at distances beyond 1000 meters we still receive signals with strong RSSI values.
In comparison, we extracted data from the VeReMi dataset and plotted RSSI vs. distance, shown in the following picture.
VeReMi dataset RSSI vs. distance: as we expected, RSSI decreases as distance increases.
Can you help us explain whether our result is normal and what may be causing the issue we have now? Thanks!
I am not familiar with the VeReMi project, so I do not know what value it refers to as "the RSSI" when a frame is received. The accompanying arXiv paper mentions no more details than that "the RSSI of the receiver" is logged on frame receptions.
Cursory inspection of the code for logging the dataset you mentioned shows that, on every reception of a frame, a method is called that sums up the power levels of all transmissions currently present at the receiver.
From this, it appears quite straightforward that (a) how far a frame traveled when it arrives at the receiver has only little relation to (b) the total amount of power experienced by the receiver at this time.
If you are interested in the Received Signal Strength (RSS) of every frame received, there is a much simpler path you can follow: taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm()
The same approach should also work for Veins 4.6 (on which, I believe, the VeReMi dataset you are referring to is based).
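Spelled out, that call chain might look like the following sketch (the helper name getFrameRss_dBm is made up; include paths follow Veins 4.6/5 alpha 1 conventions, and depending on your Veins version the types may live in a Veins namespace):

#include "veins/base/phyLayer/PhyToMacControlInfo.h"
#include "veins/modules/phy/DeciderResult80211.h"
#include "veins/modules/messages/WaveShortMessage_m.h"

// Helper: extract the per-frame RSS (in dBm) from a received WSM.
double getFrameRss_dBm(WaveShortMessage* wsm)
{
    // the PHY attaches its DeciderResult to the frame's control info
    PhyToMacControlInfo* ctrlInfo = check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo());
    DeciderResult80211* result = check_and_cast<DeciderResult80211*>(ctrlInfo->getDeciderResult());
    return result->getRecvPower_dBm();
}

You could call such a helper from your application layer's onWSM() handler for every received frame.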
In simulations that only use SimplePathlossModel, Veins' version of a free space path loss model, this will result in the familiar curve:

Where exactly should I place the code for statistics in veins-lte example [duplicate]

I'm trying to calculate the end-to-end delay for SimpleServerApp in Veins-LTE and I'm unable to get any results; when I open the result file, all the statistics related to the delay are 0 or NaN.
I looked at the TicToc tutorial and tried to do something like that, but that way I didn't even get the statistics:
On the module:
delayVector.record(delay);
delayHist.collect(delay);
and when calling finish():
delayHist.recordAs("delayFinish");
where
simtime_t delay;
cOutVector delayVector;
cLongHistogram delayHist;
Then I tried to copy the procedure from other statistic recording, but I think that can't be used in my case, because I want to send a long:
On the NED file:
@signal[delay](type="long");
@statistic[delay](title="delay"; source="delay"; record=vector, stats, histogram);
On the module:
emit(delay, delay); // where the first delay is the signal and the second one the value
That's what I do to calculate the delay:
On the sending module:
msg->setSendingTime();
On the receiving module:
simtime_t delay = simTime() - msg->getSendingTime();
I'd appreciate any help!
Since version 4.1, OMNeT++ has provided the concept of statistics/metrics collection and recording using the signal mechanism.
In brief, the signal mechanism works as follows: a given value is attached to a signal (a built-in object type) and this information is recorded to output files (either as scalars or as vectors), which can later be analyzed in order to infer certain behavior.
If you don't fully understand how this mechanism works, please make sure to first read the following sections of the OMNeT++ manual:
4.15 Signal-Based Statistics Recording
12 Result Recording and Analysis
Once you wrap your head around these concepts, you will find it easier to get the output results you want.
As far as your question is concerned, if you want to use the signal mechanism in the SimpleServerApp, you will first have to declare the signal and the corresponding statistic in the .ned file:
@signal[nameOfSignal](type="sameAsTypeOfVariable");
@statistic[nameOfStatistic](title="nameToAppearInTheOutputFile"; source="nameOfTheSourceOfThisStatistic"; record=typeOfStat1, typeOfStat2, typeOfStat3);
Then you need to declare the signal variable in .h:
simsignal_t nameOfMetricSignal;
Then register the signal in initialize() in the .cc file, using the same name you used for the signal in the .ned file:
nameOfMetricSignal = registerSignal("nameOfSignal");
Lastly, all you have to do is emit() the signal: attach the value to the signal and let it be recorded. Where you do that depends on your implementation.
emit(nameOfMetricSignal, theVariableToBeAttached);
For you that would be something like:
NED:
@signal[delay](type="simtime_t");
@statistic[delay](title="delay"; source="delay"; record=mean, sum, stats, vector);
.h
simsignal_t delaySignal;
.cc
delaySignal = registerSignal("delay");
.cc
emit(delaySignal, delay);
If you are getting 0 or NaN, that could be due to a division by a wrong number, or a mismatch between the type of the signal and the type of the variable. Also, make sure vector and scalar recording are not turned off (false) in omnetpp.ini.
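Putting the pieces together, a minimal receiving module might look like the following sketch (written against the OMNeT++ 5 API; DelayRecorder is an illustrative name, and it assumes the @signal/@statistic declarations above are present in the module's .ned file):

#include <omnetpp.h>

using namespace omnetpp;

class DelayRecorder : public cSimpleModule
{
  protected:
    simsignal_t delaySignal;

    virtual void initialize() override {
        // must match the name declared as @signal[delay] in the .ned file
        delaySignal = registerSignal("delay");
    }

    virtual void handleMessage(cMessage *msg) override {
        // sendingTime is set automatically by the kernel when the message is sent
        simtime_t delay = simTime() - msg->getSendingTime();
        emit(delaySignal, delay); // recorded per the @statistic declaration
        delete msg;
    }
};

Define_Module(DelayRecorder);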

How to store different timestamps of packets in OMNeT++

I am new to OMNeT++. I am doing a simple simulation where a client sends some packets to a server. I want, for instance, to store the timestamp of the first packet sent and, later, the timestamp of the tenth packet sent. I would like to be able to store those two timestamps in two variables, timestamp_of_first_packet and timestamp_of_last_packet, kind of like:
int packets_sent = 1;
cPacket* testPacket = new cPacket();
double timestamp_of_first_packet = testPacket->getTimestamp().dbl();
packets_sent++;
...
double timestamp_of_last_packet = testPacket->getTimestamp().dbl();
The aim is to calculate a time interval between the two packets, with this formula:
double time_interval = timestamp_of_last_packet - timestamp_of_first_packet;
I know that this method is wrong, because both variables store the same value.
How can I store both timestamps correctly? Thanks in advance.
You can get the current simulation time by calling simTime(). If you want some time to pass in your simulation, have your module schedule an event to itself (using scheduleAt()). Remember that your module is written in C++, so you can use all of its features (like member variables) to write clean code.
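For example, a minimal sketch along those lines (the module name ClientApp, the "out" gate, and the 1 s send interval are all made up for illustration):

#include <omnetpp.h>

using namespace omnetpp;

class ClientApp : public cSimpleModule
{
  protected:
    int packetsSent = 0;
    simtime_t timestampOfFirstPacket;
    simtime_t timestampOfLastPacket;

    virtual void initialize() override {
        // kick off a self-scheduled timer; one packet is sent per timer event
        scheduleAt(simTime() + 1.0, new cMessage("sendTimer"));
    }

    virtual void handleMessage(cMessage *msg) override {
        send(new cPacket("data"), "out");
        packetsSent++;
        if (packetsSent == 1)
            timestampOfFirstPacket = simTime(); // remember when the 1st packet was sent
        if (packetsSent == 10) {
            timestampOfLastPacket = simTime();  // remember when the 10th packet was sent
            simtime_t timeInterval = timestampOfLastPacket - timestampOfFirstPacket;
            EV << "Interval between 1st and 10th packet: " << timeInterval << endl;
            delete msg;
            return;
        }
        scheduleAt(simTime() + 1.0, msg); // reuse the timer message
    }
};

Define_Module(ClientApp);

Because the two timestamps are member variables, they survive across handleMessage() calls, which is exactly what the two separate local variables in the question could not do.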