Modify dependent variable in stiff solver (VODE) - Fortran

I am using the dvode ODE solver from netlib to solve a stiff sparse system (the application is atmospheric chemistry). On the first call, the subroutine dvode completes a set of initialisation tasks and takes the array of initial values of the dependent variable y as input. In subsequent calls, the routine performs the actual integration, and the array y is used as output only.
For various reasons, I need to modify one element of the dependent array y during the integration. As y is used as output for all but the first call to dvode, modifications to the input values of y are ignored. It appears the relevant data are stored in a workspace array.
Is there any way to coerce dvode to let me change the value of the dependent array during the integration? I don't want to mess with the internals of the solver, and if possible I want to avoid altering the workspace arrays, since there may be all kinds of dependencies that will be difficult to foresee. I have tried alternating between initialisation and integration calls, but this makes things much slower.
If there is no clear solution, I would also consider trying another (Fortran-compatible) solver for stiff, highly non-linear ODEs.

utPLSQL to test stored procedures

I'm new to utPLSQL. We have an existing application which contains lots of stored procedures. Most of the procedures insert or update table values at the end. Is there any way in utPLSQL to test these table values? I can see a lot of examples on functions alone rather than on stored procedures.
Thank you
Test the data
In your unit test you can test more than just function results. After executing your stored procedure, you can just query your table and see if it inserted what you expected it would insert.
Depending on the stored proc it might be hard to find exactly which data it inserted, but in many cases you'll be able to do that, because you can search for specific values, use a sequence to get the inserted ID, and so on.
To compare the data with your expectation, you select the data into variables and compare those against expected values (you could do that in a cursor loop if you need to compare multiple rows), but it may be easier to compare two cursors, one with expected data (you can construct this using select from dual), and one with the actual data.
The documentation, and especially the chapter Advanced data comparison, contains various examples of how to compare cursor data. I'm not going to paste them here, because I don't know which one applies to your case, and both utPLSQL and its documentation are very much alive, so it's best to check out the latest version when you need it.
Refactor your proc into a package
Nevertheless, you may find that it's hard to test big, complicated stored procs by the data they output. I've found that the easiest way to refactor this is to create a package. In the package you can expose a procedure just like the one you have now, but it can call other procedures and functions in the package, which you can also expose. That way, it's easier to test those individual parts, and maybe you can test a big part of the logic without needing to write data, making the tests easier to write and faster to execute.
It's not completely elegant, since you're exposing parts, just for the purpose of testing, that you otherwise wouldn't expose. Nevertheless, I've found it's typically really easy to refactor a stored proc into a package, especially if you already used sub-procedures in the stored proc, and this way you can quickly, and without much risk, get to a structure that is easy to test.
It doesn't have to be packages; you could split it up into separate smaller procedures as well, but I like packages, because they keep all the logic of the stored proc together, and they allow you to call the proc in roughly the same way as you would before. A package is little more than a grouped set of stored procedures, functions and types. If your application requires it, you can even keep the original stored proc, but let it call its counterpart in the package; that way you get your refactoring without needing to change any of the clients.
Refactor parts of your proc into an object type
If you go a step further, you can make object types. There are various advantages to that, but they work quite differently from packages, so if you're not familiar with them, this might be a big step.
First of all, the objects can keep a state, and you can have multiple of those, if you need to. Packages can hold state as well, but only one per session or per call to the database. Object types allow you to create as many instances as you need, and each keeps its own state.
With object types, you have instances of objects that you can pass around. That means you can inject a bit of logic into a stored procedure by passing it an object of a certain type. Moreover, you can make subtypes of an object type, so if your procedure does not write data to the table itself, but instead calls a method of some type X that does the actual saving, you can run the test using a subtype Y of X that doesn't actually save the data, but just helps you verify that the method was called with the right parameters. You're then getting into the area of mocking, which is a very useful tool for making tests more efficient.
Again a client may not be ready to pass objects like this, so I tend to create two (or more) package procedures. One is the official entry point for the application. It won't do much, except create an object of type X and pass it to the second procedure, that contains the actual logic (optionally split up further). This way, my application can call a simple stored proc, while my tests can call the second stored procedure and pass it an instance of subtype Y if needed.

Fftw3 library and plans reuse

I'm about to use the FFTW3 library for a very specific task.
I have a high-load packet stream with variable frame size, which is processed like this:
    while (thereIsStillData) {
        copyDataToInputArray();
        createFFTWPlan();
        performExecution();
        destroyPlan();
    }
Since creating plans is rather expensive, I want to modify my code to something like this:
    while (thereIsStillData) {
        if (inputArraySizeDiffers()) destroyOldAndCreateNewPlan();
        copyDataToInputArray(); // e.g. `memcpy` or `std::copy`
        performExecution();
    }
Can I do this? I mean, does the plan contain some important information derived from the data itself, such that a plan created for one array of size N will give incorrect results when executed on another array of the same size N?
The fftw_execute() function does not modify the plan presented to it, and can be called multiple times with the same plan. Note, however, that the plan contains pointers to the input and output arrays, so if copyDataToInputArray() involves creating a different input (or output) array then you cannot afterwards use the old plan in fftw_execute() to transform the new data.
FFTW does, however, have a set of "New-array Execute Functions" that could help here, supposing that the new arrays satisfy some additional similarity criteria with respect to the old (see linked docs for details).
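For a 1-D complex transform, a new-array execute call looks roughly like this (a minimal sketch; everything except fftw_execute_dft itself is an illustrative name, and the new arrays must match the original ones in size, in-place/out-of-place layout and alignment):

    #include <fftw3.h>

    // `plan` was created earlier for buffers of the same size and alignment,
    // e.g. with fftw_plan_dft_1d(n, origIn, origOut, FFTW_FORWARD, FFTW_MEASURE).
    void transformOtherBuffers(fftw_plan plan, fftw_complex* newIn, fftw_complex* newOut)
    {
        // Same plan, different arrays: the new-array execute interface.
        fftw_execute_dft(plan, newIn, newOut);
    }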
The docs do recommend:
If you are tempted to use the new-array execute interface because you want to transform a known bunch of arrays of the same size, you should probably go use the advanced interface instead
but that's talking about transforming multiple arrays that are all in memory simultaneously, and arranged in a regular manner.
Note, too, that if your variable frame size is not too variable -- that is, if it is always one of a relatively small number of choices -- then you could consider keeping a separate plan in memory for each frame size instead of recomputing a plan every time one frame's size differs from the previous one's.
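A rough sketch of that plan-caching idea, assuming 1-D complex transforms (the container choice and the names here are mine, not part of FFTW):

    #include <fftw3.h>
    #include <map>

    // One plan per frame size, created lazily and reused for every later
    // frame of the same size.
    std::map<int, fftw_plan> planCache;

    fftw_plan planForSize(int n, fftw_complex* in, fftw_complex* out)
    {
        auto it = planCache.find(n);
        if (it == planCache.end()) {
            // Note: FFTW_MEASURE overwrites `in`/`out` while planning, so do
            // this before copying the frame's data into the input buffer.
            fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);
            it = planCache.emplace(n, p).first;
        }
        return it->second;
    }

The main loop then becomes: look up (or create) the plan for the current frame size, copy the data in, and call fftw_execute (or fftw_execute_dft if the buffers themselves change between frames).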

Use value from Input Port in Parameter of block - Simulink

I have a Simulink model that I plan on converting to C code and using elsewhere. I have defined 'input ports' in order to set variables in the Simulink model.
I am trying to find a way to use the input variables as part of a State-Space block, but I have tried everything and am not sure how else to go about it.
As mentioned, this will be converted to C/C++ code, so there is no option to use MATLAB in any way.
Say I use matrix A in the State-Space block parameters. Matrix A is defined like so: A = [Input1 0; Input2 0; 0 Input3].
I want to be able to change the values of the inputs through the code by setting the values of Input1, Input2, Input3, etc.
There is a very clear distinction in Simulink between Parameters and Signals. A parameter is something entered into a dialog, while a signal is something fed into or coming out of a block.
The matrices in the State-Space block are defined as parameters, and hence you will never be able to feed your signals into them.
You have two options.
1. Don't use the State-Space block. Rather, develop the state-space model yourself using more fundamental blocks (i.e. integrators, sums and product blocks). This is feasible for small models, but not really recommended.
2. Note that the parameters of a block are typically tunable. When you generate code, one of the files will be model_name_data.c, and this will contain a parameter structure allowing you to change the parameters (a rough sketch of this is given below).
Note that in either case, merely from a model design perspective, it'll be up to you to ensure that the changes to the model make sense (for instance, don't drive any loops unstable).
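To give a feel for the second option, here is a hedged sketch of what tuning the generated parameter structure from your C/C++ application could look like. The struct and field names (mymodel_P, A) depend entirely on your model name, parameter names and code generation settings, so treat them as assumptions rather than the actual generated API:

    // Hypothetical names: the real ones come from the generated mymodel.h
    // and mymodel_data.c.
    #include "mymodel.h"

    // Set one element of the tunable matrix parameter A between solver steps.
    // Simulink stores matrices in column-major order.
    void setAElement(int row, int col, int numRows, double value)
    {
        mymodel_P.A[col * numRows + row] = value;
    }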
You cannot tune a parameter after generating code if it is inlined as a constant value; inlining is typically done because it results in the fastest code. To have full control over the behaviour, you have to use tunable parameters. There is a table with the different code versions; depending on what you want, you can choose the right type of parameter.
Another, lazier way to achieve this in many cases is to use base workspace variables; this is very simple to set up and works fine in most cases.

Using Simulink Coder - atomic change of multidimensional parameters (matrix, vector)

I am using Simulink and Simulink Coder to generate a DLL from arbitrary models. My C application uses the MathWorks C API (CAPI).
It runs arbitrary models (hard real time, below 1 ms) and is able to modify any parameters of the model (by using tunable parameters).
For simple scalar values I obtain the address of the value.
Pseudocode:
    void* simplegain = rtwCAPI_GetSignalAddrIdx();
    *simplegain = 42;
Everything runs fine. However, this approach cannot be applied when I want an atomic change of a complete vector or matrix.
For multidimensional data I used memcpy to write all values from a local buffer to the result of rtwCAPI_GetSignalAddrIdx(). Measurements have shown that using memcpy is too slow.
Analysing the generated code shows various calls to rt_Lookup:
    real_T rt_Lookup(const real_T *x, int_T xlen, real_T u, const real_T *y)
Here x is the pointer to the matrix. The address of the matrix is declared statically in a global structure `rtDataAddrMap`. I can read it out, but do not know how to change it.
What I would like to achieve is:
1. Define a second map in my application (same size).
2. Write all new values to this second map.
3. Change just the pointer in rtDataAddrMap to activate the second map.
The general question: how can I change multidimensional parameters atomically? What is the regular way to do this (code generation options, etc.)?
The specific question (if my approach is right): what are reasonable solutions for changing the data pointer of a matrix?
Atomic in the sense of a single instruction that does its work in one clock cycle (and thus cannot be interrupted) is not achievable for this kind of multidimensional array. Instead you will need some kind of real-time mechanism like a mutex or semaphore to protect your data. Mutexes and semaphores are built upon atomic operations, which guarantee that two processes will not be able to consume the same resource at once.
Your approach with ping-pong buffering of your data area will probably improve performance. Unfortunately I do not have enough experience with MathWorks-generated code to say exactly how to implement it there.
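Independent of the generated code, the general ping-pong idea can be sketched as follows (this assumes a single writer and a single reader, and that the reader is done with a buffer before the writer reuses it; the sizes and names are illustrative):

    #include <atomic>
    #include <cstring>

    constexpr std::size_t kMatrixSize = 3 * 2;   // illustrative dimensions

    // Two equally sized buffers and an atomically swapped pointer to the one
    // the real-time task should currently read.
    static double bufferA[kMatrixSize];
    static double bufferB[kMatrixSize];
    static std::atomic<double*> activeMatrix{bufferA};

    // Writer side: fill the inactive buffer, then flip the pointer. The reader
    // sees either the complete old matrix or the complete new one, never a mix.
    void publishMatrix(const double* newValues)
    {
        double* inactive =
            (activeMatrix.load(std::memory_order_acquire) == bufferA) ? bufferB
                                                                      : bufferA;
        std::memcpy(inactive, newValues, sizeof(double) * kMatrixSize);
        activeMatrix.store(inactive, std::memory_order_release);
    }

    // Reader side (e.g. the real-time step function): grab the current pointer
    // once per step and work with that snapshot.
    const double* currentMatrix()
    {
        return activeMatrix.load(std::memory_order_acquire);
    }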

Storing collections of items that an algorithm might need

I have a class MyClass that stores a collection of PixelDescriptor* objects. MyClass uses a function specified by a Strategy pattern style object (call it DescriptorFunction) to do something for each descriptor:
    void MyFunction()
    {
        PixelDescriptor* descriptorA = DescriptorCollection[0];
        for (PixelDescriptor* descriptor : DescriptorCollection)
        {
            DescriptorFunction->DoSomething(descriptor);
        }
    }
However, this only makes sense if the descriptors are of a type that the DescriptorFunction knows how to handle. That is, not all DescriptorFunctions know how to handle all descriptor types, but as long as the stored descriptors are of a type that the specified visitor knows about, all is well.
How would you ensure the right type of descriptors are computed? Even worse, what if the strategy object needs more than one type of descriptor?
I was thinking about making a composite descriptor type, something like:
    class CompositeDescriptor
    {
        std::vector<PixelDescriptor*> Descriptors;
    };
Then a CompositeDescriptor could be passed to the DescriptorFunction. But again, how would I ensure that the correct descriptors are present in the CompositeDescriptor?
As a more concrete example, say one descriptor is Color and another is Intensity. One Strategy may be to average Colors. Another strategy may be to average Intensities. A third strategy may be to pick the larger of the average color or the average intensity.
I've thought about having another Strategy style class called DescriptorCreator that the client would be responsible for setting up. If a ColorDescriptorCreator was provided, then the ColorDescriptorFunction would have everything it needs. But making the client responsible for getting this pairing correct seems like a bad idea.
Any thoughts/suggestions?
EDIT: In response to Tom's comments, a bit more information:
Essentially DescriptorFunction is comparing two pixels in an image. These comparisons can be done in many ways (besides just finding the absolute difference between the pixels themselves). For example: 1) compute the average of corresponding pixels in regions around the pixels (centered at the pixels); 2) compute a fancier "descriptor", which typically produces a vector at each pixel, and average the difference of the two vectors element-wise; 3) compare 3D points corresponding to the pixel locations in external data, etc.
I've run into two problems.
1) I don't want to compute everything inside the strategy (if the strategy just took the 2 pixels to compare as arguments), because then the strategy has to store lots of other data (the image, a mask describing some invalid regions, etc.) and I don't think it should be responsible for that.
2) Some of these things are expensive to compute. I have to do this millions of times (the pixels being compared are always different, but the features at each pixel do not change), so I don't want to compute any feature more than once. For example, consider the strategy that compares the fancy descriptors. In each iteration, one pixel is compared to all other pixels. This means that in the second iteration, all of the features would have to be computed again, which is extremely redundant. This data needs to be stored somewhere that all of the strategies can access - this is why I was trying to keep a vector in the main client.
Does this clarify the problem? Thanks for the help so far!
The first part sounds like a visitor pattern could be appropriate. A visitor can ignore any types it doesn't want to handle.
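A minimal sketch of that idea, assuming an Accept method is added to your PixelDescriptor base class (ColorDescriptor, IntensityDescriptor and the Visit overloads are hypothetical names for illustration):

    class ColorDescriptor;
    class IntensityDescriptor;

    // Visitor with no-op defaults: a concrete strategy overrides only the
    // descriptor types it knows how to handle and silently ignores the rest.
    class DescriptorVisitor
    {
    public:
        virtual ~DescriptorVisitor() = default;
        virtual void Visit(ColorDescriptor&) {}
        virtual void Visit(IntensityDescriptor&) {}
    };

    class PixelDescriptor
    {
    public:
        virtual ~PixelDescriptor() = default;
        virtual void Accept(DescriptorVisitor& v) = 0;
    };

    class ColorDescriptor : public PixelDescriptor
    {
    public:
        void Accept(DescriptorVisitor& v) override { v.Visit(*this); }
    };

    class IntensityDescriptor : public PixelDescriptor
    {
    public:
        void Accept(DescriptorVisitor& v) override { v.Visit(*this); }
    };

    // A strategy that only cares about intensities; colour descriptors in the
    // collection are simply skipped.
    class AverageIntensity : public DescriptorVisitor
    {
    public:
        void Visit(IntensityDescriptor&) override { /* accumulate intensity */ }
    };

The loop in MyFunction would then call descriptor->Accept(*DescriptorFunction) instead of DoSomething, and each strategy reacts only to the descriptor types it overrides.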
If they require more than one descriptor type, then it is a different abstraction entirely. Your description is somewhat vague, so it's hard to say exactly what to do. I suspect that you are overthinking it. I say that because generally choosing arguments for an algorithm is a high-level concern.
I think the best advice is to try it out.
I would write each algorithm with the concrete arguments (or stubs if it's well understood). Write code to translate the collection into the concrete arguments. If there is an abstraction to be made, it should become apparent while writing your translations. Writing a few choice helper functions for these translations is usually the bulk of the work.
Giving a concrete example of the descriptors and a few example declarations might give enough information to offer more guidance.