solve a discontinuous ODE - pyomo

I am a newcomer to Pyomo and I want to simulate a hybrid system governed by an ODE with a switch, i.e.,
C1[t] == 0 for t <= td
dC1[t] == -m.k*m.C1[t] for t > td
where td is a parameter.
How can I program this in Pyomo? Any tips are welcome.
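One possible approach with pyomo.dae is sketched below. It assumes td, the decay constant k, the time horizon and the initial value are all fixed, known numbers (the concrete values here are made up for illustration); because td is a plain parameter, an ordinary Python if inside the constraint rule is enough to switch the right-hand side when the model is built.

import pyomo.environ as pyo
import pyomo.dae as dae

m = pyo.ConcreteModel()

td = 2.0                               # switching time (made-up value)
m.k = pyo.Param(initialize=0.5)        # decay constant (made-up value)

m.t = dae.ContinuousSet(bounds=(0.0, 10.0))
m.C1 = pyo.Var(m.t)
m.dC1dt = dae.DerivativeVar(m.C1, wrt=m.t)

def _ode(m, t):
    # td is a fixed number, so this branch is resolved while the model is built
    if t <= td:
        return m.dC1dt[t] == 0.0       # hold the state at its initial value up to td
    return m.dC1dt[t] == -m.k * m.C1[t]
m.ode = pyo.Constraint(m.t, rule=_ode)

m.C1[0.0].fix(0.0)                     # C1 = 0 on [0, td]

# discretize in time and hand the square system to any NLP solver, e.g. Ipopt
pyo.TransformationFactory('dae.finite_difference').apply_to(
    m, nfe=200, wrt=m.t, scheme='BACKWARD')
# pyo.SolverFactory('ipopt').solve(m)

If the state is supposed to jump to a nonzero value at td (for example a dose added at td), you would instead fix C1 at the first discretization point after td rather than forcing the derivative to zero on the first interval.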

Related

Internal error when using MPI Intel library with reduction operation on communicators

I am having some issues when using reduction operations on MPI communicators.
I have a lot of different communicators, created this way:
MPI_ERR_SONDAGE(MPI_Group_incl(world_group, comm_size, &(on_going_communicator[0]), &local_group));
MPI_ERR_SONDAGE(MPI_Comm_create_group(MPI_COMM_WORLD, local_group, tag, &communicator)); tag++;
When I call a reduction operation like so:
MPI_ERR_SONDAGE(MPI_Allreduce(&(temporary[0]), &(temporary_glo[0]), (int)lignes.size(), MPI_DOUBLE, MPI_MAX, communicator));
I get
Assertion failed in file ../../src/mpid/ch4/src/intel/ch4_shm_coll.c
at line 2266: comm->shm_numa_layout[my_numa_node].base_addr
/.../oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(MPL_backtrace_show+0x1c) [0x2ace34033c8c]
/Cci/Admin/oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(MPIR_Assert_fail+0x21)
[0x2ace33aaffe1]
/.../oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(+0x24f609)
[0x2ace337c6609]
/.../oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(+0x19b518)
[0x2ace33712518]
/.../oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(+0x1686aa)
[0x2ace336df6aa]
/.../oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(+0x251ac7)
[0x2ace337c8ac7]
/Cci/Admin/oneapi/2021.4.0/mpi/2021.4.0/lib/release/libmpi.so.12(PMPI_Allreduce+0x562)
[0x2ace33685712]
I only have this problem on big test cases, meaning lots of communicators with a reasonable amount of data to reduce, so I cannot create an MCVE, sorry.
When I set the environment variables I_MPI_COLL_DIRECT=off and I_MPI_COLL_INTRANODE=pt2pt, the code works fine. I guess the problem is induced by the use of NUMA, and forcing point-to-point communication inhibits it.
But my fear is that these options will lead to degraded performance, so I would really like to understand the underlying problem.
I have tried with:
intel/2020.1.217
intel/2020.2.254
intel/2021.4.0
All of them show basically the same error.
Could you tell me, or at least give me a hint about, what is going on?
Thank you.

Cannot run Fortran code

My program fails with errors and I really have no idea why; could you please help me? The problem is as follows:
This flow problem is a classic example of viscous diffusion. The governing equation for such a problem is derived using boundary-layer theory to reduce the full Navier-Stokes equations to the single parabolic PDE
∂u/∂t = ν ∂²u/∂y²,
with the necessary initial and boundary conditions
t = 0: u(0) = 0, u(0.04 m) = 0;
t > 0: u(0) = 40.0 m/s, u(0.04 m) = U = 0.0 m/s.
This problem may be described physically as transient viscous-driven flow between two plates of infinite extent, separated by a distance of 0.04 m. Initially both plates are at rest. After time t = 0, the upper plate is set in motion in the positive x-direction with a velocity of 40.0 m/s. Due to the viscosity of the fluid filling the space between the plates, successive laminae of fluid are set in motion as time elapses. Eventually the system reaches a "quasi-steady state", as the velocity profile becomes more or less constant in time. The governing equation lends itself nicely to the use of finite-difference techniques to solve the problem in the transient domain.
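For reference, the DuFort-Frankel update for this PDE is u_i^{n+1} = [(1 - 2r) u_i^{n-1} + 2r (u_{i+1}^n + u_{i-1}^n)] / (1 + 2r) with r = ν Δt/Δy². A few lines of NumPy sketch the scheme (an illustration only, not the assignment code; the grid size, r, wall velocity and tolerance mirror the values used in the Fortran program below):

import numpy as np

nu = 2.17e-4                 # kinematic viscosity [m^2/s]
L = 0.04                     # plate spacing [m]
n = 41                       # grid points
dy = L / (n - 1)
r = 0.217                    # r = nu*dt/dy^2, so dt = r*dy^2/nu
dt = r * dy**2 / nu

u_prev = np.zeros(n)         # time level n-1
u_curr = np.zeros(n)         # time level n (fluid initially at rest)

for step in range(20000):
    u_next = np.empty_like(u_curr)
    u_next[0], u_next[-1] = 40.0, 0.0          # wall velocities, as in the code below
    # DuFort-Frankel: explicit three-level scheme (u_prev is zero on the first
    # step, so it reduces to the two-level start-up used in the Fortran code)
    u_next[1:-1] = ((1 - 2*r) * u_prev[1:-1]
                    + 2*r * (u_curr[2:] + u_curr[:-2])) / (1 + 2*r)
    if np.max(np.abs(u_next - u_curr)) < 1e-3:  # quasi-steady state reached
        break
    u_prev, u_curr = u_curr, u_next

print(step + 1, "steps to reach the quasi-steady profile")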
Computer Code in Fortran for DUFORT FRANKEL SCHEME.
! Homework 1: DuFort-Frankel scheme
! Program computes the numerical solution to the
! transient flow problem.
! The following initial and boundary conditions are applied:
!   t = 0: u(y=0) = 40.0 m/s
!   t > 0: u(y=0) = 0.0; u(y=0.04m) = 0.0
      parameter(maxn=41,eps=1.0e-3)
      integer k,m,mm,count
      real*8 u_old(1100,maxn),u_new(1100,maxn),y(maxn)
      real*8 t,tau,h,r,tmax,u_init,nu,sum,error
!
      data h,m,u_init,nu,r,tmax /0.001,41,0.0,2.17e-4,0.217,2.5e5/
!
      open(unit=1,file='hw1_dufort.out',status='unknown')
      tau=r*h**2/nu
      mm=m-1
      error=1.0
!
      count=0
      k=1
      t=0.0
      y(1)=0.0
!
! grid coordinates and initial velocity field
      do 2 i=2,m
        y(i)=y(i-1)+h
2     continue
      do 3 i=1,m
        u_old(k,i)=u_init
3     continue
!
      write(1,*)'Velocity Results:'
      write(1,10)t,(u_old(k,j),j=1,m)
      do while ((error.gt.eps).and.(count.lt.1080))
        count=count+1
        sum=0.0
        t=t+tau
        u_old(k,1)=40.0
        u_old(k,m)=0.0
        do 4 i=2,mm
          if (k.lt.2) then
            u_new(k,i)=(2.0*r/(1.0+2.0*r))*(u_old(k,i+1)+u_old(k,i-1))
          else
            u_new(k,i)=(2.0*r/(1.0+2.0*r))*(u_old(k,i+1)+u_old(k,i-1))
     &                +((1.0-2.0*r)/(1.0+2.0*r))*u_old(k-1,i)
          end if
! accumulate the change between successive time levels
          sum=sum+abs(u_new(k,i)-u_old(k,i))
4       continue
! store the new interior values as the next time level and advance
        do 5 i=2,mm
          u_old(k+1,i)=u_new(k,i)
5       continue
        error=sum
        k=k+1
      end do
!
10    format(2x,f10.3,2x,41f8.4)
      write(1,'(" Number of steps for convergence = ",i4)')count
      end

Platform independent delay timer

Problem
I originally posted this question, but the approach apparently did not meet my customer's spec, so I am redefining the problem:
To understand the problem a bit more, the timing diagram in the original post can be used. The delayer needs to be platform independent; to be precise, I run a job scheduler, and apparently my current delayer is not going to be compatible with it. What I am stuck on is the "independent" bit of the delayer. I have already knocked out a delayer in Simulink using Probe (probing for sample time) and Variable Integer Delay blocks. However, during our acceptance phase we realised that the scheduler does not work with such a configuration and needs something more intrinsic and basic - something like a while loop running in a C/C++ application.
Initial Solution
The solution I can think of is the following:
Define a global, static time-slice variable called tslc. Basically, this is how often the scheduler runs; the unit could be seconds.
Define a function that has the following body:
void hold_for_secs(float* tslc, float* _delay, float* _tmr, char* _flag) {
    /* subtract one scheduler time slice from the remaining delay */
    _delay[0] -= tslc[0];
    /* raise the flag once the remaining delay has effectively reached zero */
    if (_delay[0] < (float)(1e-5)) {
        _flag[0] = '1';
    } else {
        _flag[0] = '0';
    }
}
Please forgive my poor function-coding skills; I merely tried to come up with a solution. I would really appreciate it if people helped me out a little with suggestions here!
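For what it's worth, the arithmetic of this tick-based countdown can be modelled in a few lines of plain Python before porting it to the target (the 1e-5 tolerance mirrors the C sketch above; the numbers in the example call are made up):

def ticks_to_expiry(delay, tslc):
    """Count how many scheduler ticks elapse before the hold flag would be set."""
    ticks = 0
    remaining = delay
    while remaining >= 1e-5:       # same tolerance as in hold_for_secs
        remaining -= tslc          # one scheduler time slice per call
        ticks += 1
    return ticks

print(ticks_to_expiry(delay=0.5, tslc=0.01))   # -> 50 ticks for a 0.5 s hold at a 10 ms slice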
Computing Platform
Windows 2000 server
Target computing platform
An embedded system card - something similar to a modern graphics card or sound card that sits in one of the PCI slots. We do our testing on a testbed and finally implement the solution on that embedded system card.

in depth explanation of the side effects interface in clojure overtone generators

I am new to Overtone/SuperCollider. I know how sound is formed physically. However, I don't understand the magic inside Overtone's sound-generating functions.
Let's say I have a basic sound:
(definst sin-wave [freq 440 attack 0.01 sustain 0.4 release 0.1 vol 0.4]
  (* (env-gen (lin-env attack sustain release) 1 1 0 1 FREE)
     (+ (sin-osc freq)
        (sin-osc (* freq 2))
        (sin-osc (* freq 4)))
     vol))
I understand the ASR envelope cycle, the sine waves, the frequency and the volume here; they describe the amplitude of the sound over time. What I don't understand is the time. Since time is absent from the inputs of all these functions, how do I add things like echo and other cool effects?
If I were to write my own sin-osc function, how would I specify the amplitude of my sound at a specific point in time? Say my sin-osc has to ensure that at 1/4 of the cycle the output reaches its peak amplitude of 1.0; what is the interface I can code against to control that?
Without knowing this, the sound synth generators in Overtone don't make sense to me; they look like strange functions with unknown side effects.
Overtone does not specify the individual samples or shapes over time for each signal; it is really just an interface to the SuperCollider server (which defines a protocol for interaction, of which the SuperCollider language is the canonical client, and Overtone is another). For that reason, all Overtone is doing behind the scenes is sending the server instructions for how to construct a synth graph. The SuperCollider server is the thing that actually calculates which samples get sent to the DAC, based on the definitions of the synths that are playing at any given time. That is why you are given primitive synth elements like sine oscillators, square waves and filters: these are invoked on the server to actually calculate the samples.
I got an answer from droidcore on the #supercollider Freenode IRC channel:
d: time is really like wallclock time, it's just going by
d: the ugen knows how long each sample takes in terms of milliseconds, so it knows how much to advance its notion of time
d: so in an adsr, when you say you want an attack time of 1.0 seconds, it knows that it needs to take 44100 samples (say) to get there
d: the sampling rate is fixed and is global. it's set when you start the synthesis process
d: yeah well that's like doing a lookup in a sine wave table
d: they'll just repeatedly look up the next value in a table that
represents one cycle of the wave, and then just circle around to
the beginning when they get to the end
d: you can't really do sample-by-sample logic from the SC side
d: Chuck will do that, though, if you want to experiment with it
d: time is global and it's implicit; it's available to all the oscillators all the time
d: but internally it's not really a closed form, where you say "give me the sample for this time value"
d: you say "time has advanced 5 microseconds. give me the new value"
d: it's more like a stream
d: you don't need random access to the oscillators' values, just the next one in the time sequence
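To make the table-lookup, one-sample-at-a-time idea above concrete, here is a tiny illustration in plain Python (not Overtone/SuperCollider code; the sample rate and table size are arbitrary): an oscillator that advances its notion of time by one sample per output value and wraps around a single-cycle sine table.

import math

SAMPLE_RATE = 44100                      # fixed, global sampling rate
TABLE_SIZE = 2048
TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_osc(freq, n_samples):
    """Yield successive samples of a sine at freq Hz by circling the wavetable."""
    phase = 0.0
    step = freq * TABLE_SIZE / SAMPLE_RATE   # table slots to advance per sample
    for _ in range(n_samples):
        yield TABLE[int(phase) % TABLE_SIZE]
        phase += step                        # time is implicit: one sample tick

# e.g. the first 0.1 s of a 440 Hz tone
samples = list(sin_osc(440.0, SAMPLE_RATE // 10))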

How to profile my own functions in C++ and OpenGL?

Is there anything easy and simple for profiling functions in C++/OpenGL? All I could find was gDEBugger, and looking through the documentation I can't find a way to do what I want. Let me explain...
As I've said in other questions, I have a game with defense towers. Currently there are just 3, but this number is configurable. I have a single draw function for all the towers (this function may call other functions, it doesn't matter) and I would like to profile this single function (first with 3 towers, then increase the number and profile again). Then I would like to implement display lists for the towers, do the same profiling, and see whether there is any benefit to using display lists in this specific situation.
What profiling tool do you recommend for such a task? If it matters, I'm coding OpenGL on Windows with Visual Studio 10. Or can this be done with gDEBugger? Any pointers?
P.S.: I'm aware that display lists were removed in OpenGL 3.1, but the above is just an example.
NVIDIA has its own GPU profiling tool, and so do AMD and Intel.
For coarse-grained monitoring you can measure the time it takes to execute a frame from the beginning to after your buffer swap or glFlush()/glFinish():
while( running )
{
    start_time = GetTimeInMS();   // placeholder for whatever millisecond timer you use
    RenderFrame();
    SwapGLBuffers();
    end_time = GetTimeInMS();
    cout << "Frame time (ms): " << (end_time - start_time) << endl;
}