Dymola - Continuous Input for State Machine - state

While trying to use state machines in Dymola (btw: I am an absolute newbie) I am having trouble declaring a sine curve as an input variable. I get the following first error message (I paste only the beginning):
Continuous time parts and discrete parts don't decompose for:
_StateMachines.state1.activeReset
_StateMachines.state1.act...
and a second one:
Decomposition in base clocks failed.
See the file dsmodelBaseClockDecomposition.mof.
I understand that the problem is caused by trying to use a continuous time variable, namely the sine function, as an input for a discrete block, namely the state machine.
How can I connect the sine function with the state machine?
EDIT:
My code looks like this (I have deleted the annotations):
model ZLG3_v2 "2nd Version of ZLG3"
  inner Real T_2(start=283);
  Real T_ZuL(start=295);

  model State1
    outer output Real T_2;
  equation
    T_2 = previous(T_2) + 2;
  end State1;
  State1 state1;

  model State3
    outer output Real T_2;
  equation
    T_2 = previous(T_2) - 1;
  end State3;
  State3 state3;

  Modelica.Blocks.Sources.Sine sine(freqHz=0.25, offset=305);
equation
  //T_ZuL = 295;
  T_ZuL = sine.y;

  initialState(state1);
  transition(
    state3,
    state1, T_2 <= T_ZuL,
    immediate=false,
    reset=true,
    synchronize=false,
    priority=1);
  transition(
    state1,
    state3, T_2 > T_ZuL,
    immediate=false,
    priority=1,
    reset=true,
    synchronize=false);
end ZLG3_v2;
The two lines
//T_ZuL = 295;
T_ZuL=sine.y;
are of interest. With the (currently uncommented) equation using sine.y, the error message occurs; with the constant equation instead, everything works just fine.
Thank you very much in advance and best regards.

The issue is the inferred clock, which you have to make explicit; otherwise you are feeding a continuous signal (sine.y) into a discrete state machine that has its own discrete clock. To sample the sine signal with the clock of the state machine, a sample block is enough:
model ZLG3_v3 "3rd Version of ZLG3"
  inner Real T_2(start=283);
  State1 state1;
  State2 state2;
  Modelica.Blocks.Logical.Greater greater;
  Modelica.Blocks.Sources.RealExpression realExpression(y=T_2);
  Modelica.Blocks.Sources.Sine sin(y(start=295), freqHz=0.25, offset=305);
  Modelica_Synchronous.RealSignals.Sampler.Sample sample;

  model State1
    outer output Real T_2;
  equation
    T_2 = previous(T_2) + 2;
  end State1;

  model State2
    outer output Real T_2;
  equation
    T_2 = previous(T_2) - 1;
  end State2;
equation
  transition(
    state2,
    state1, greater.y,
    immediate=true, reset=true, synchronize=false, priority=1);
  transition(
    state1,
    state2, not greater.y,
    immediate=true, reset=true, synchronize=false, priority=1);
  connect(realExpression.y, greater.u2);
  connect(sin.y, sample.u);
  connect(sample.y, greater.u1);
end ZLG3_v3;
Edit:
I noticed that there is an issue with the state machine itself; below is a snapshot of a state machine with the same real input signal as the trigger for the transitions, which raises no errors when checked and simulates:


How does Veins calculate RSSI in a Simple Path Loss Model?

We are working on an application based on the Veins framework which needs the RSSI value of a received signal and the distance between sender and receiver.
We referred to the VeReMi project, which also calculates an RSSI value and sends it to the upper layer.
We compared our simulation results (RSSI vs. distance) with the VeReMi dataset and they look quite different. Can you help us understand how RSSI is calculated and whether our result is normal?
In our application, we obtain the distance and RSSI value with:
auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
At the lower level, the RSSI is set in the Decider80211p::processSignalEnd(AirFrame* msg) method, as in the VeReMi project.
if (result->isSignalCorrect()) {
    DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
    // go on with processing this AirFrame, send it to the Mac-Layer
    WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
    simtime_t start = frame->getSignal().getReceptionStart();
    simtime_t end = frame->getSignal().getReceptionEnd();
    double rssiValue = calcChannelSenseRSSI(start, end);
    decap->setRSSI(rssiValue);
    phy->sendUp(frame, result);
}
Regarding the simulation configuration, our config.xml differs from VeReMi's; the following lines are not present in our case:
<AnalogueModel type="VehicleObstacleShadowing">
    <parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>
The 802.11p-specific parameters and NIC settings in omnetpp.ini are the same.
Also, our simulation is based on a Boston map.
The scatter plot of RSSI vs. distance from our simulation is shown in the following figure; it shows that even at distances beyond 1000 meters we still receive signals with strong RSSI values.
In comparison, we extracted data from the VeReMi dataset and plotted RSSI vs. distance in the next figure; this is what we were expecting, with RSSI decreasing as distance increases.
Can you help us explain whether our result is normal and what may cause the issue we have now? Thanks!
I am not familiar with the VeReMi project, so I do not know what value it is referring to as "the RSSI" when a frame is received. The accompanying arXiv paper mentions no more details than that "the RSSI of the receiver" is logged on frame receptions.
Cursory inspection of the code for logging the dataset you mentioned shows that, on every reception of a frame, a method is called that sums up the power levels of all transmissions currently present at the receiver.
From this, it appears quite straightforward that (a) how far a frame traveled when it arrives at the receiver has only little relation to (b) the total amount of power experienced by the receiver at this time.
If you are interested in the Received Signal Strength (RSS) of every frame received, there is a much simpler path you can follow: Taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm(). The same approach should work for Veins 4.6 (on which, I believe, the VeReMi dataset you are referring to is based) as well.
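To put that expression in context, here is a minimal sketch of a helper your application layer could call on a received frame. The wrapper function, the include paths, and the veins namespace are my assumptions for Veins 5 alpha 1 (in Veins 4.6 the same classes exist without the namespace); only the cast chain itself is taken from above.

#include <omnetpp.h>
#include "veins/base/phyLayer/PhyToMacControlInfo.h"   // header locations may differ per Veins version
#include "veins/modules/phy/DeciderResult80211.h"

// Hypothetical helper: per-frame received power in dBm, taken from the
// control info the PHY attaches to every frame it hands up.
double receivedPowerDbm(omnetpp::cMessage* wsm)
{
    auto* ctrlInfo = omnetpp::check_and_cast<veins::PhyToMacControlInfo*>(wsm->getControlInfo());
    auto* result = omnetpp::check_and_cast<veins::DeciderResult80211*>(ctrlInfo->getDeciderResult());
    return result->getRecvPower_dBm();
}

Unlike the calcChannelSenseRSSI() value above, this is the receive power of that one frame, which is what an RSS-vs-distance plot needs.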
In simulations that only use SimplePathlossModel, Veins' version of a free space path loss model, this will result in the familiar curve:
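For intuition on why that curve falls off monotonically: a free-space / simple path loss model attenuates a single frame's receive power with travelled distance following the standard log-distance relation (a textbook formula, not Veins source code; the exponent α is 2 in pure free space and is typically a configurable model parameter):

\[ \mathrm{RSS}(d) = \mathrm{RSS}(d_0) - 10\,\alpha\,\log_{10}\!\left(\frac{d}{d_0}\right) \ \text{dBm} \]

So per-frame RSS decreases monotonically with distance. Summing the power of all transmissions currently present at the receiver, as the logging code discussed above does, is exactly what breaks that monotone relation.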

Dealing with complex send recv message within a for loop

I am trying to parallelise a biological model in C++ with boost::mpi. It is my first attempt, and I am entirely new to the Boost library (I started from the Boost C++ Libraries book by Schäling). The model consists of grid cells and cohorts of individuals living within each grid cell. The classes are nested, such that a vector of Cohort* belongs to a GridCell. The model runs for 1000 years, and at each time step there is dispersal, so the cohorts of individuals move randomly between grid cells. I want to parallelise the content of the for loop, but not the loop itself, as each time step depends on the state of the previous one.
I use world.send() and world.recv() to send the necessary information from one rank to another. Because sometimes there is nothing to send between ranks, I use mpi::status and world.iprobe() to make sure the code does not hang waiting for a message that was never sent (I followed this tutorial).
The first part of my code seems to work fine, but I am having trouble making sure all the sent messages have been received before moving on to the next step in the for loop. In fact, I noticed that some ranks move on to the following time step before the other ranks have had time to send their messages (or at least that is what it looks like from the output).
I am not posting the full code because it consists of several classes and is quite long; if you are interested, it is on GitHub. Below is rough pseudocode. I hope this is enough to understand the problem.
int main()
{
    // initialise the GridCells and Cohorts living in them

    // depending on the number of cores requested, split the
    // grid cells that are processed by each core evenly, and
    // store the relevant grid cells in a vector of GridCell*

    // start to loop through each time step
    for (int k = 0; k < (burnIn + simTime); k++)
    {
        // calculate the survival and reproduction probabilities
        // for each Cohort and the dispersal probability

        // the dispersing Cohorts are sorted based on the rank of
        // the destination and stored in multiple vector<Cohort*>

        // I send the vector<Cohort*> with
        world.send(…)

        // the receiving rank gets the vector of Cohorts with:
        mpi::status statuses[world.size()];
        for (int st = 0; st < world.size(); st++)
        {
            ....
            if (world.iprobe(st, tagrec))
                statuses[st] = world.recv(st, tagrec, toreceive[st]);
            // world.iprobe ensures that the code doesn't hang when there
            // are no dispersers
        }

        // do some extra calculations here

        // wait until all the messages have been received, and then the time step ends.
        // This is the bit where I am stuck.
        // I've seen examples with wait_all for the non-blocking isend/irecv,
        // but I don't think it is applicable in my case.
        // The problem is that I noticed that some ranks proceed to the next
        // time step before all the other ranks have sent their messages.
    }
}
I compile with
mpic++ -I/$HOME/boost_1_61_0/boost/mpi -std=c++11 -Llibdir -lboost_mpi -lboost_serialization -lboost_locale -o out
and execute with mpirun -np 5 out, but I would like to be able to execute with a higher number of cores on an HPC cluster later on (the model will be run at the global scale, and the number of cells might depend on the grid cell size chosen by the user).
The installed toolchain is g++ (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0 with Open MPI 2.1.1.
The fact that you have nothing to send is an important piece of information in your scenario. You cannot deduce that fact from the absence of a message alone; the absence of a message only means nothing has been sent yet.
Simply sending a zero-sized vector and skipping the probing is the easiest way out.
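Here is a rough sketch of that pattern (my own illustration, not your code): every rank posts a non-blocking send of a possibly empty vector to every other rank and then does a blocking receive from every other rank. The payload is shown as std::vector<int> for brevity; your serialised Cohort data would go there. Because each rank waits for exactly one message from every peer each step, this also gives you the per-step synchronisation you were missing.

// Sketch: always send (possibly empty) vectors, never probe.
#include <boost/mpi.hpp>
#include <vector>

namespace mpi = boost::mpi;

int main()
{
    mpi::environment env;
    mpi::communicator world;
    const int tag = 0;
    const int steps = 1000; // burnIn + simTime in your model

    for (int k = 0; k < steps; ++k) {
        // ... survival, reproduction, dispersal; fill one outgoing vector per destination rank ...
        std::vector<std::vector<int>> outgoing(world.size()); // placeholder payload type

        // Post non-blocking sends to every other rank, even when the vector is empty.
        std::vector<mpi::request> sendReqs;
        for (int dst = 0; dst < world.size(); ++dst)
            if (dst != world.rank())
                sendReqs.push_back(world.isend(dst, tag, outgoing[dst]));

        // Receive exactly one (possibly empty) vector from every other rank.
        // These blocking recvs guarantee no rank runs ahead of its peers.
        for (int src = 0; src < world.size(); ++src) {
            if (src == world.rank()) continue;
            std::vector<int> incoming;
            world.recv(src, tag, incoming);
            // ... merge incoming dispersers into the local grid cells ...
        }

        // Make sure our own sends have completed before the buffers go out of scope.
        mpi::wait_all(sendReqs.begin(), sendReqs.end());
    }
}

The key design choice is that the receiving side no longer has to guess whether a message exists; an empty vector is itself the "nothing to send" information.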
Otherwise you would probably have to change your approach radically or implement a very complex speculative execution / rollback mechanism.
Also note that the linked tutorial uses probe in a very different fashion.

Where does the error stem from in the process?

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.all;

entity reset40 is
    Port ( CLOCK : in  STD_LOGIC;  --50MHz
           CIKIS : out STD_LOGIC
         );
end reset40;

architecture Behavioral of reset40 is
    signal A : std_logic;
begin
    process(CLOCK) --line20
        variable fcounter  : unsigned(24 downto 0);
        variable counter_A : integer range 0 to 40 := 0;
    begin
        if rising_edge(CLOCK) then
            fcounter := fcounter + 1;
        end if;
        A <= fcounter(6); --fa=fclock/2^6
        if ((rising_edge(A)) and (counter_A /= 40)) then
            counter_A := counter_A + 1;
            CIKIS <= A;
        else
            CIKIS <= '0';
        end if;
    end process;
end Behavioral;
ERROR:Xst:827 - "C:/Users/reset40/reset40.vhd" line 20: Signal CIKIS
cannot be synthesized, bad synchronous description. The description
style you are using to describe a synchronous element (register,
memory, etc.) is not supported in the current software release.
What is the error about the clock? Why is it a 'bad synchronous description'?
Your basic mistake is that all of the code in the process needs to run only on the rising edge of the clock, because the target chip only has flip-flops that can respond to one edge.
The process you have written is sensitive to changes in clock due to this line:
process(clock)
so it is sensitive to both clock edges.
You have then put part of the code inside an
if rising_edge(clock) then
which limits activity to just the rising edge, which is good. The rest of your code is unfortunately outside of that if clause, and so is described as operating on every edge of the clock, either rising or falling. In simulation this will be fine, but the synthesiser cannot find any flipflops which can do that, hence the error.
Other things to note: you need to reset or initialise your counter somehow, either with a reset signal (so that any time the reset signal is high, the counter is set back to all zeros) or with an initialisation statement.
You can get away without them for pure synthesis, as the synthesiser will (in the absence of init or reset code) automatically set the counter to all zeros when the FPGA starts up. It's poor form though, as:
it won't work in simulation and simulation is vital. You may not have realised this yet, but you will eventually!
if you port the code to another architecture which cannot magically initialise things, you will get random 1s and 0s in the counter at startup. This may or may not matter, but it's best to avoid having to decide.
Besides the bad synchronous description there is also bad coding style :)
1. First of all, you should not use variables in a process if you want to describe a counter; use a signal declared in the architecture region.
2. Decide what signal type you want to use for counters: unsigned or integer. I would advise unsigned.
3. Signal A is not in the sensitivity list. But adding A to the sensitivity list of your process is no solution, because having two different clocks (clock and A) in one process is (a) not supported by every synthesis tool and (b) bad coding style => so use a second process for the second counter, clocked by the second clock.
4. You are using A in an 'if rising_edge(...) then' statement. Synthesis tools infer a (new) clock signal for the signal given in brackets, so your description would lead to an asynchronous design, which is also bad style. A good style would be to derive a clock-enable signal from fcounter(x) which enables a second counter (counter_A); counter_A is then also clocked with your main clock 'clock'.
5. fcounter has no init value despite being a register.
6. Why does fcounter have 25 bits? You are using only 7 of them. Besides that, using bit 6 in fcounter(6) will result in a clock divided by 128 (2^7). fcounter(0) represents a toggle flip-flop, which outputs f/2; fcounter(1) -> f/4, and so on...
So, what should it look like?
architecture [...]
    [...]
    signal fcounter  : unsigned(6 downto 0) := (others => '0');
    signal counter_a : unsigned(5 downto 0) := (others => '0');
begin
    process(clock)
    begin
        if rising_edge(clock) then
            -- I'm using one more bit for the counter overflow, which resets the fcounter
            if (fcounter(6) = '1') then
                fcounter <= (others => '0');
            else
                fcounter <= fcounter + 1;
            end if;
            -- enable counter_A every 64 cycles
            if (fcounter(6) = '1') then
                counter_A <= counter_A + 1;
                [...]
            end if;
        end if;
    end process;
end;
But in the end there is another question: What should this module do? Do you want to create a new /64 clock or do you want to create some kind of a reset? There are other ways to generate these kinds of signals.
Reply to Mehmet's comment:
Normally a pulse train is generated by a shift register of n bits, which is reset to all ones while the input is assigned '0'. For short pulse trains it's a good solution, but in your case a counter is more resource-efficient :)
Example for short pulse (constant) trains
input <= '0';
--input <= any_signal;

process(clock)
begin
    if rising_edge(clock) then
        if (reset = '1') then
            pulse_train <= (others => '1');
        else
            pulse_train <= pulse_train(pulse_train'high - 1 downto 0) & input;
        end if;
    end if;
end process;

output <= pulse_train(pulse_train'high);
OK, in the case of supplying an external low-frequency IC with a derived clock, it is fine to use a counter. If the counter can't supply the recommended frequency and duty cycle, you can use a counter to produce f_out*2 and feed this signal through a toggle flip-flop. The T-FF restores the duty cycle to 50/50 and divides the clock by two, down to f_out.

In-depth explanation of the side-effects interface in Clojure Overtone generators

I am new to Overtone/SuperCollider. I know how sound forms physically. However, I don't understand the magic inside Overtone's sound-generating functions.
Let's say I have a basic sound:
(definst sin-wave [freq 440 attack 0.01 sustain 0.4 release 0.1 vol 0.4]
  (* (env-gen (lin-env attack sustain release) 1 1 0 1 FREE)
     (+ (sin-osc freq)
        (sin-osc (* freq 2))
        (sin-osc (* freq 4)))
     vol))
I understand the ASR cycle of the sound envelope, the sine wave, the frequency, and the volume here; they describe the amplitude of the sound over time. What I don't understand is the time. Since time is absent from the input of all functions here, how do I add things like echo and other cool effects?
If I am to write my own sin-osc function, how do I specify the amplitude of my sound at a specific time point? Let's say my sin-osc has to ensure that at 1/4 of the cycle the output reaches the peak amplitude of 1.0; what is the interface I can code against to control that?
Without knowing this, all the sound synth generators in Overtone don't make sense to me, and they look like strange functions with unknown side effects.
Overtone does not specify the individual samples or shapes over time for each signal; it is really just an interface to the SuperCollider server (which defines a protocol for interaction, of which the SuperCollider language is the canonical client, and Overtone is another). For that reason, all Overtone is doing behind the scenes is telling the SuperCollider server how to construct a synth graph. The SuperCollider server is the thing that actually calculates the samples sent to the DAC, based on the definitions of the synths that are playing at any given time. That is why you are given primitive synth elements like sine oscillators, square waves, and filters: these are invoked on the server to actually calculate the samples.
I got an answer from droidcore at #supercollider/Freenode IRC
d: time is really like wallclock time, it's just going by
d: the ugen knows how long each sample takes in terms of milliseconds, so it knows how much to advance its notion of time
d: so in an adsr, when you say you want an attack time of 1.0 seconds, it knows that it needs to take 44100 samples (say) to get there
d: the sampling rate is fixed and is global. it's set when you start the synthesis process
d: yeah well that's like doing a lookup in a sine wave table
d: they'll just repeatedly look up the next value in a table that
represents one cycle of the wave, and then just circle around to
the beginning when they get to the end
d: you can't really do sample-by sample logic from the SC side
d: Chuck will do that, though, if you want to experiment with it
d: time is global and it's implicit it's available to all the oscillators all the time
but internally it's not really like it's a closed form, where you say "give me the sample for this time value"
d: you say "time has advanced 5 microseconds. give me the new value"
d: it's more like a stream
d: you don't need to have random access to the oscillators values, just the next one in time sequence
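To make that "stream of samples" model concrete, here is a small, hypothetical table-lookup oscillator in C++. This is illustrative only, not Overtone or SuperCollider source: the sample rate is fixed up front, and the only interface is "give me the next value", exactly as described above.

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical wavetable oscillator illustrating the stream-style interface:
// no random access by time, just "advance one sample and return the new value".
class TableOscillator {
public:
    TableOscillator(double sampleRate, double freq, std::size_t tableSize = 4096)
        : table(tableSize), phase(0.0),
          increment(freq * static_cast<double>(tableSize) / sampleRate)
    {
        const double pi = std::acos(-1.0);
        for (std::size_t i = 0; i < tableSize; ++i)
            table[i] = std::sin(2.0 * pi * i / tableSize); // one cycle of a sine
    }

    // Called once per sample by the audio engine: time advances implicitly.
    double next()
    {
        const double value = table[static_cast<std::size_t>(phase)];
        phase += increment; // step size depends on frequency and sample rate
        if (phase >= static_cast<double>(table.size()))
            phase -= static_cast<double>(table.size()); // wrap to the start of the cycle
        return value;
    }

private:
    std::vector<double> table;
    double phase;
    double increment;
};

An envelope generator works the same way: it is just another per-sample stream whose current value is multiplied with the oscillator's output, which is what the (* (env-gen ...) (+ (sin-osc ...) ...) vol) expression in the question expresses.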

What can be adjusted in this simple code to make the signal change in the FSM?

Well, I have process a in my main component and process b in a sub-component (implemented in the main one).
Both processes a and b have only the clock in their sensitivity lists.
Process a controls an enable signal called ready: if it is 1, process b can work; if it is 0, process b does nothing.
The problem is in process a: when process a changes the value of the enable signal to 0, the change does not take effect until the next clock cycle, so process b ends up running for an extra clock cycle.
a: process(clk)
begin
    if (rising_edge(clk)) then
        if (output /= old_output) then
            enable <= '0';
        end if;
    end if;
end process;

b: process(clk)
begin
    if (rising_edge(clk)) then
        if (enable = '1') then
            --do anything
        end if;
    end if;
end process;
The reason is that the value is latched/sampled at the exact rising edge of the clock. At that time, 'enable' is still equal to one. In that simulation delta, enable will get the value zero, but it won't be available until AFTER that delta.
This is also true when enable BECOMES one (given that it is also generated on a rising clock edge): the process latches the value exactly when the clock rises, so in the simulator enable will look high for a whole clock period even though "--do anything" will not happen.
You can think of this as a real electrical circuit instead of a programming language: the evaluation of "output /= old_output" consumes time, and you as a designer want it to be DONE before the next rising clock edge.
Hope this helps, but this is how the language works. I could give you a better answer if I could see both the setting and the resetting of the enable signal.