I want to modify some parameters set in an element's section of the .ini file in OMNeT++, say a node's transmission rate, during the simulation run, e.g. when the node receives some control message.
I found information saying that it's possible to somehow loop the configuration stated as some_variable = ${several values}, but there are no conditional clauses in .ini files and no way to pass any data from C++ functions to those files (as far as I can tell).
I use INET, but maybe users of other model frameworks have already dealt with such a problem.
In fact, you can use the built-in constraint expression in the .ini file. This allows you to create runs for a given configuration while respecting the specified constraint (condition).
However, the constraint only applies to parameters specified in the .ini file, i.e. it won't help you if the variable you are trying to change is computed dynamically as part of the code.
Below is a rather involved snippet from an .ini file that uses many of the built-in features mentioned above (variable iteration, constraints, etc.):
# Parameter assignment using iteration loops and constraints #
# first define the static values on which the others depend #
scenario.node[*].application.ADVlowerBound = ${t0= 0.1}s
scenario.node[*].application.aggToServerUpperBound = ${t3= 0.9}s
#
## assign values to "dependent" parameters using variable names and loop iterations #
scenario.node[*].application.ADVupperBound = ${t1= ${t0}..${t3} step 0.1}s # ADVupperBound == t1; t1 will take values starting from t0 to t3 (0.1 - 0.9) iterating 0.1
scenario.node[*].application.CMtoCHupperBound = ${t2= ${t0}..${t3} step 0.1}s
#
## connect "dependent" parameters to their "copies" -- this part of the snippet is only variable assignment.
scenario.node[*].application.CMtoCHlowerBound = ${t11 = ${t1}}s
scenario.node[*].application.joinToServerLowerBound = ${t12 = ${t1}}s
#
scenario.node[*].application.aggToServerLowerBound = ${t21 = ${t2}}s
scenario.node[*].application.joinToServerUpperBound = ${t22 = ${t2}}s
#
constraint = ($t0) < ($t1) && ($t1) < ($t2) && ($t2) < ($t3)
# END END END #
The code above creates all possible combinations of time values for t0 to t3, where they can take values between 0.1 and 0.9.
t0 and t3 are the beginning and end points, respectively. t1 and t2 take values based on them.
t1 will take values between t0 and t3, incremented by 0.1 each time (see the syntax above). The same is true for t2.
However, I want t0 to always be smaller than t1, t1 smaller than t2, and t2 smaller than t3. I specify these conditions in the constraint line.
I am sure a thorough read of this section of the manual will help you find the solution.
If you want to change some value during the simulation, you can just do that in your C++ code, something like:
void handleMessage(cMessage *msg)
{
    if (msg->getKind() == yourKind) { // replace yourKind with the kind you use for these messages
        transmission_rate = new_value;
    }
}
What you are referring to as some_variable = ${several values} can be used to perform multiple runs with different parameters, for example one run with a rate of 1s, one with 2s, and one with 10s. That would then be:
transmission_rate = ${1, 2, 10}s
For more detailed information on how to use such values (e.g. for loops), see the relevant section in the OMNeT++ User Manual.
While you can certainly change volatile parameters manually, OMNeT++ (as far as I am aware) offers no integrated support for automatically changing parameters at runtime.
You can, however, write model code that changes volatile parameters programmatically, as sketched below.
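For example, here is a minimal sketch of what such model code could look like (the MyApp class, the "radio" submodule path, the bitrate parameter, and CONTROL_MSG_KIND are all placeholders to adapt to your own model):
void MyApp::handleMessage(cMessage *msg)
{
    if (msg->getKind() == CONTROL_MSG_KIND) { // hypothetical kind of the control message
        // look up the module that owns the parameter and overwrite it;
        // a volatile parameter is re-read by its owner on every access,
        // a non-volatile one only takes effect if the owner implements
        // handleParameterChange()
        cModule *radio = getParentModule()->getSubmodule("radio");
        radio->par("bitrate").setDoubleValue(54e6);
    }
    delete msg;
}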
I have a simple FMU file which contains a sine block that takes u as input and outputs y. In this case, u is set equal to time. In my C++ code I have loaded the FMI library from FMILibrary and have done all the necessary steps up to the point where I want to give my input u a new value of pi (3.14). So I went:
fmistatus = fmi2_import_set_real(fmu, &uRef, 1, &pi);
while (timeCurrent < timeEnd) {
    fmistatus = fmi2_import_do_step(fmu, timeCurrent, stepSize, fmi2_true);
    timeCurrent += stepSize;
}
u was still set to time even though I tried to give it a new value. Did I miss something?
P.S. Is there anywhere I can find a more detailed description of the FMI library functions? Currently I can only find input/output descriptions, or did I miss something again?
UPDATE: After a few trials, I think this issue might be because I was trying to redefine my equation u = time. In other words, when I change my u variable into a RealInput block in OpenModelica, everything works fine. So what if I really want to redefine a certain equation? What do I have to do?
You are not able to set just any variable through FMI, and especially not a variable with a binding equation; I assume your Modelica model has "u = time;". Instead of having "u = time", you need to add a top-level input without any equation (so that the exported FMU has it as an input) and then connect that to the sine block.
Details:
For a co-simulation FMU, the restrictions on what you can set are given in the state diagram in section 4.2.4 of the FMI 2 specification.
Between fmi2DoStep calls you can only set Real variables that have causality="input", or causality="parameter" and variability="tunable"; an input bound by an equation doesn't qualify.
Before starting the integration you could set other variables as well, but those are only guess values for the initialization and should not overwrite the "u = time" equation.
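Once u is such a proper top-level input (causality="input"), the loop from the question can set it before every step. A sketch using the same FMILibrary calls, where uRef is now assumed to hold the value reference of the new input:
fmi2_real_t pi = 3.14159265358979;
while (timeCurrent < timeEnd) {
    // inputs may be set between fmi2DoStep calls (FMI 2 spec, section 4.2.4)
    fmistatus = fmi2_import_set_real(fmu, &uRef, 1, &pi);
    fmistatus = fmi2_import_do_step(fmu, timeCurrent, stepSize, fmi2_true);
    timeCurrent += stepSize;
}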
I'd like to build and train a multi-layer LSTM model (state_is_tuple=True) in Python, and then load and use it in C++. But I'm having a hard time figuring out how to feed and fetch states in C++, mainly because I don't have string names I can reference.
E.g. I put the initial state in a named scope such as
with tf.name_scope('rnn_input_state'):
    self.initial_state = cell.zero_state(args.batch_size, tf.float32)
and this appears in the graph as below, but how can I feed these in C++?
Also, how can I fetch the current state in C++? I tried the graph construction code below in Python, but I'm not sure it's the right thing to do, because last_state should be a tuple of tensors, not a single tensor (though I can see that the last_state node in TensorBoard is 2x2x50x128, which suggests it simply concatenated the states, as I have 2 layers, an RNN size of 128, a mini-batch size of 50, and LSTM cells with 2 state vectors each).
with tf.name_scope('outputs'):
    outputs, last_state = legacy_seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if infer else None)
    output = tf.reshape(tf.concat(outputs, 1), [-1, args.rnn_size], name='output')
and this is what it looks like in TensorBoard
Should I concat and split the state tensors so there is only ever one state tensor going in and out? Or is there a better way?
P.S. Ideally the solution won't involve hard-coding the number of layers (or rnn size). So I can just have four strings input_node_name, output_node_name, input_state_name, output_state_name, and the rest is derived from there.
I managed to do this by manually concatenating the state into a single tensor. I'm not sure if this is wise, since this is how TensorFlow used to handle states, but it is now deprecating that approach and switching to tuple states. Instead of setting state_is_tuple=False and risking my code becoming obsolete soon, I've added extra ops to manually stack and unstack the states to and from a single tensor. That said, it works fine in both Python and C++.
The key code is:
# setting up
zero_state = cell.zero_state(batch_size, tf.float32)
state_in = tf.identity(zero_state, name='state_in')
# based on https://medium.com/@erikhallstrm/using-the-tensorflow-multilayered-lstm-api-f6e7da7bbe40#.zhg4zwteg
state_per_layer_list = tf.unstack(state_in, axis=0)
state_in_tuple = tuple(
    # TODO make this not hard-coded to LSTM
    [tf.contrib.rnn.LSTMStateTuple(state_per_layer_list[idx][0], state_per_layer_list[idx][1])
     for idx in range(num_layers)]
)
outputs, state_out_tuple = legacy_seq2seq.rnn_decoder(inputs, state_in_tuple, cell, loop_function=loop if infer else None)
state_out = tf.identity(state_out_tuple, name='state_out')
# running (training or inference)
state = sess.run('state_in:0')  # fetch the zero state once
# then, for each mini-batch:
feed = {'data_in:0': x, 'state_in:0': state}
[y, state] = sess.run(['data_out:0', 'state_out:0'], feed)
Here is the full code if anyone needs it:
https://github.com/memo/char-rnn-tensorflow
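On the C++ side, feeding and fetching then works purely by those tensor names. A rough sketch, assuming the graph has already been loaded into a tensorflow::Session and that data_in/data_out are the names used in the graph above:
#include <vector>
#include "tensorflow/core/public/session.h"

void run_rnn(tensorflow::Session* session, const tensorflow::Tensor& x, int num_steps) {
    std::vector<tensorflow::Tensor> fetched;
    // run the zero_state subgraph once to get the initial state
    TF_CHECK_OK(session->Run({}, {"state_in"}, {}, &fetched));
    tensorflow::Tensor state = fetched[0];
    for (int step = 0; step < num_steps; ++step) { // inference loop
        TF_CHECK_OK(session->Run({{"data_in", x}, {"state_in", state}},
                                 {"data_out", "state_out"}, {}, &fetched));
        // fetched[0] is data_out; fetched[1] is the state to feed back in
        state = fetched[1];
    }
}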
I'm trying to split up the minimize function over two machines. On one machine, I'm calling "compute_gradients", on another I call "apply_gradients" with gradients that were sent over the network. The issue is that calling apply_gradients(...).run(feed_dict) doesn't seem to work no matter what I do. I've tried inserting placeholders in place of the tensor gradients for apply_gradients,
variables = [W_conv1, b_conv1, W_conv2, b_conv2, W_fc1, b_fc1, W_fc2, b_fc2]
loss = -tf.reduce_sum(y_ * tf.log(y_conv))
optimizer = tf.train.AdamOptimizer(1e-4)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
compute_gradients = optimizer.compute_gradients(loss, variables)
placeholder_gradients = []
for grad_var in compute_gradients:
    placeholder_gradients.append((tf.placeholder('float', shape=grad_var[1].get_shape()), grad_var[1]))
apply_gradients = optimizer.apply_gradients(placeholder_gradients)
then later when I receive the gradients I call
feed_dict = {}
for i, grad_var in enumerate(compute_gradients):
    feed_dict[placeholder_gradients[i][0]] = tf.convert_to_tensor(gradients[i])
apply_gradients.run(feed_dict=feed_dict)
However, when I do this, I get
ValueError: setting an array element with a sequence.
This is only the latest thing I've tried, I've also tried the same solution without placeholders, as well as waiting to create the apply_gradients operation until I receive the gradients, which results in non-matching graph errors.
Any help on which direction I should go with this?
Assuming that each gradients[i] is a NumPy array that you've fetched using some out-of-band mechanism, the fix is simply to remove the tf.convert_to_tensor() invocation when building feed_dict:
feed_dict = {}
for i, grad_var in enumerate(compute_gradients):
    feed_dict[placeholder_gradients[i][0]] = gradients[i]
apply_gradients.run(feed_dict=feed_dict)
Each value in a feed_dict should be a NumPy array (or some other object that is trivially convertible to a NumPy array). In particular, a tf.Tensor is not a valid value for a feed_dict.
The subject line basically says it all: can gdb set a breakpoint on a line relative to the beginning of a function?
If I give the location based on the file and a line number, that value can change if I edit the file. In fact it tends to change quite often, and in an inconvenient way, if I edit more than a single function during refactoring. However, it would be less likely to change if it were (line-)relative to the beginning of a function.
In case it's not possible to give the line offset from the start of a function, is it perhaps possible to use convenience variables to emulate it? That is, could I declare convenience variables that map to the start of particular functions (a list that I would keep updated)?
According to help break neither seems to be available, but I thought I'd better ask to be sure.
(gdb) help break
Set breakpoint at specified line or function.
break [PROBE_MODIFIER] [LOCATION] [thread THREADNUM] [if CONDITION]
PROBE_MODIFIER shall be present if the command is to be placed in a
probe point. Accepted values are `-probe' (for a generic, automatically
guessed probe type) or `-probe-stap' (for a SystemTap probe).
LOCATION may be a line number, function name, or "*" and an address.
If a line number is specified, break at start of code for that line.
If a function is specified, break at start of code for that function.
If an address is specified, break at that exact address.
With no LOCATION, uses current execution address of the selected
stack frame. This is useful for breaking on return to a stack frame.
THREADNUM is the number from "info threads".
CONDITION is a boolean expression.
Multiple breakpoints at one place are permitted, and useful if their
conditions are different.
Do "help breakpoints" for info on other commands dealing with breakpoints.
It's a longstanding request to add this to gdb. However, it doesn't exist right now. It's maybe sort of possible with Python, but perhaps not completely, as Python doesn't currently have access to all the breakpoint re-set events (so the breakpoint might work once but not on re-run or library load or some other inferior change).
However, the quoted text shows a nicer way: use a probe point. These are so-called "SystemTap probe points", but in reality they're more like a generic ELF + GCC feature; they originated from the SystemTap project but don't depend on it. They let you mark a spot in the source and easily put a breakpoint on it, regardless of other edits to the source. They are already used on Linux distros to mark special spots in the unwinder and longjmp runtime routines, to make debugging work nicely in the presence of these.
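For illustration, a minimal sketch of such a marker (assuming <sys/sdt.h> from the systemtap-sdt development package; the provider name myapp and probe name process_checkpoint are arbitrary):
#include <sys/sdt.h>

void process(int x)
{
    // a named marker that keeps working no matter how the
    // surrounding lines move around during refactoring
    DTRACE_PROBE1(myapp, process_checkpoint, x);
    // ... rest of the function ...
}
You can then set the breakpoint with:
(gdb) break -probe-stap myapp:process_checkpoint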
I understand that this is an old question, but I still could not find a better solution even now in 2017. Here's a Python solution. Maybe it's not the most robust/cleanest one, but it works very well in many practical scenarios:
import re
import gdb

class RelativeFunctionBreakpoint(gdb.Breakpoint):
    def __init__(self, functionName, lineOffset):
        super().__init__(RelativeFunctionBreakpoint.calculate(functionName, lineOffset))

    @staticmethod
    def calculate(functionName, lineOffset):
        """
        Calculates an absolute breakpoint location (file:linenumber)
        based on functionName and lineOffset.
        """
        # get info about the file and line number where the function is defined
        info = gdb.execute("info line " + functionName, to_string=True)
        # extract file name and line number
        m = re.match(r'Line[^\d]+(\d+)[^"]+"([^"]+)', info)
        if not m:
            raise Exception('Failed to find function %s.' % functionName)
        line = int(m.group(1)) + lineOffset  # add the line offset
        fileName = m.group(2)
        return "%s:%d" % (fileName, line)
USAGE:
basic:
    RelativeFunctionBreakpoint("yourFunctionName", lineOffset=5)
custom breakpoint:
    class YourCustomBreakpoint(RelativeFunctionBreakpoint):
        def __init__(self, funcName, lineOffset, customData):
            super().__init__(funcName, lineOffset)
            self.customData = customData

        def stop(self):
            # do something
            # here you can access self.customData
            return False  # or True if you want the execution to stop
Advantages of the solution
relatively fast, because the breakpoint is set only once, before the execution starts
robust to changes in the source file if they don't affect the function
Disadvantages
Of course, it's not robust to edits in the function itself
Not robust to changes in the output syntax of the info line funcName gdb command (there is probably a better way to extract the file name and line number)
others? feel free to point them out
I am using the IBM CPLEX optimizer to solve an optimization problem, and I don't want all the terminal prints that the optimizer does. Is there a member function that turns this off in the IloCplex or IloModel class? These are the prints about cuts and iterations. Prints to the terminal are expensive, and my problem will eventually be on the order of millions of variables, so I don't want to waste time on these superfluous outputs. Thanks.
Using cplex/concert, you can completely turn off cplex's logging to the console with
cpx.setOut(env.getNullStream());
where cpx is an IloCplex object and env is its IloEnv. You can also use the setOut function to redirect the log to a file.
There are several cplex parameters that control what gets logged. For example, MIPInterval sets the number of MIP nodes processed between log lines. Setting MIPDisplay to 0 turns off the display of cuts except when new solutions are found, while a MIPDisplay of 5 shows detailed information about every LP subproblem.
Logging-related parameters include MIPInterval, MIPDisplay, SimDisplay, BarDisplay, and NetDisplay.
You set parameters with the setParam function:
cpx.setParam(IloCplex::MIPInterval, 1000);
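Putting the pieces together, a minimal sketch (assuming the variables, constraints, and objective are built elsewhere):
#include <ilcplex/ilocplex.h>

int main()
{
    IloEnv env;
    IloModel model(env);
    // ... build variables, constraints, and the objective here ...
    IloCplex cpx(model);
    cpx.setOut(env.getNullStream());           // silence the log entirely
    cpx.setParam(IloCplex::MIPDisplay, 0);     // or merely reduce verbosity
    cpx.setParam(IloCplex::MIPInterval, 1000); // fewer node log lines
    cpx.solve();
    env.end();
    return 0;
}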