SystemVerilog assertion which checks that a clock is provided when active_clk is high

In the design specification that I'm verifying the DUT against, there is a requirement that the word clock and bit clock are generated when the active_clk signal is high. I have little experience with SVA, so I was hoping that someone with more experience could point me in the right direction, or better yet, provide a solution.

Have an always-on clock which you can use to predict rising/falling edges of the other two clocks within some fixed or calculated duration. Something like the code below:
bit aon_clk;
always #1 aon_clk = ~aon_clk;

property clk_chk;
  @(posedge aon_clk)
  // Within, say, 25 always-on clocks you should expect a rise and then a fall of bit_clk
  active_clk |=> ##[0:25] ($rose(bit_clk) && active_clk) ##[0:25] ($fell(bit_clk) && active_clk);
endproperty
// A similar property can be written for word_clk
assert property (clk_chk) else $display($time, " Clks not generated");


How to set a time delay in an if statement in Qt

I have a real-time signal. I need to check whether the signal crosses a threshold and then set a delay of about 1 second. How do I implement that?
if (a_vertical > onThreshold) // what's the expression to set a time delay?
{
    ui->rdo_btn_vertical->setStyleSheet(StyleSheetOn1);  // On LED
}
else
{
    ui->rdo_btn_vertical->setStyleSheet(StyleSheetOff1); // Off LED
}
The easy and dumb solution is to start a QTimer in single-shot mode and, while detecting the pulse, check whether it is running (it should not be). "Easy and dumb" doesn't mean "good".
Depending on the nature of the signal, filtering it might be a viable option.
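For illustration, here is a rough sketch of that single-shot QTimer approach, assuming the check runs in a slot of the widget class and that holdTimer is a QTimer member made single-shot in the constructor (the class, slot, and member names are made up):

#include <QTimer>

// In the constructor: holdTimer.setSingleShot(true);

// Slot called whenever a new sample of the signal arrives
void MyWidget::onNewSample(double a_vertical)
{
    if (holdTimer.isActive())
        return;                                              // still inside the ~1 s hold window
    if (a_vertical > onThreshold)
    {
        ui->rdo_btn_vertical->setStyleSheet(StyleSheetOn1);  // On LED
        holdTimer.start(1000);                               // ignore further changes for ~1 second
    }
    else
    {
        ui->rdo_btn_vertical->setStyleSheet(StyleSheetOff1); // Off LED
    }
}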

Platform independent delay timer

Problem
I originally posted this question, which apparently did not meet my customer's spec. Hence I am redefining the problem:
To understand the problem a bit more, the timing diagram in the original post can be used. The delayer needs to be platform independent. To be precise, I run a job scheduler, and apparently my current delayer is not going to be compatible with it. What I am stuck with is the "independent" bit of the delayer. I have already knocked out a delayer in SIMULINK using Probe (probing the sample time) and Variable Integer Delay blocks. However, during our acceptance phase we realised that the scheduler does not comply with such a configuration and needs something more intrinsic and basic, something like a while loop running in a C/C++ application.
Initial Solution
The solution I can think of is the following:
Define a global and static time-slice variable called tslc. Basically, this is how often the scheduler runs. The unit could be seconds.
Define a function that has the following body:
/* Count down the remaining delay by one scheduler time slice per call;
   raise the flag once the delay has (approximately) expired.
   Note: the _tmr argument is currently unused. */
void hold_for_secs(float* tslc, float* _delay, float* _tmr, char* _flag) {
    _delay[0] -= tslc[0];
    if (_delay[0] < (float)(1e-5)) {
        _flag[0] = '1';
    } else {
        _flag[0] = '0';
    }
}
Please forgive my poor function-coding skills; I merely tried to come up with a solution. I would really appreciate it if people could help me out a little with suggestions here!
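To show how I intend the function to be used, here is a rough sketch of the scheduler calling it once per time slice (the scheduler_tick name and the constants are just placeholders):

/* Hypothetical periodic task run by the scheduler once per time slice. */
static float tslc    = 0.01f;   /* scheduler period: 10 ms        */
static float delay   = 2.0f;    /* we want to hold for 2 seconds  */
static char  expired = '0';

void scheduler_tick(void) {
    if (expired != '1') {
        hold_for_secs(&tslc, &delay, 0, &expired);  /* _tmr is unused, so pass 0 */
    } else {
        /* the delay has elapsed: perform the deferred action here */
    }
}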
Computing Platform
Windows 2000 server
Target computing platform
An embedded system card, something similar to a modern graphics card or sound card that goes into one of the PCI slots. We do testing on a testbed and finally implement the solution on that embedded system card.

Random scheduling of n non-overlapping events in a time interval

Please forgive me if this is a well-known class of problem with a well-known solution. I've been searching but obviously not succeeding.
Assume I have n events that must occur in an interval (e.g., [0,1]). Each event is associated with a duration that is stochastically drawn from a predefined distribution. Assume that the interval is much larger than all the event durations combined: valid schedules always exist. The probabilities of event occurrence over the [0,1] interval are not uniform, but the events are independent as long as they are not overlapping.
What is an efficient and accurate (random) way to schedule these events?
Here's my current pseudocode:
lowerBound = min(interval)
upperBound = max(interval)
for n in numEvents {
    draw startTime
    draw endTime
    while ( startTime is less than lowerBound || endTime exceeds upperBound ) {
        draw startTime
        draw endTime
    }
    add event
    reset lowerBound and upperBound to define largest remaining (event-free) interval
}
I think that by choosing the largest remaining interval in which to schedule the new event, I'm making the events overdispersed (more spaced out than they'd otherwise be). This nonetheless seems very efficient, and probably extremely accurate when the number of events is small.
I'm using C++, although that's probably irrelevant.
I'd also greatly appreciate search terms, if you know what this kind of problem is called.
Context: Accuracy is most important to me here. This is not a time-intensive step in the overall program.
I'd just place each event randomly, checking for collisions, and if there's a collision wipe the slate clean and start over.
You say that the interval is much larger than the sum of the event durations, so collisions will be rare, so this method is quite fast in practice.
You say you want accurate results. I'm not sure what that means, but at a guess I'd say that all valid solutions should be equally probable; the current solution in the question doesn't satisfy that requirement, but this one does.
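A minimal sketch of that rejection-sampling idea, using uniform draws as stand-ins for the actual (non-uniform) start-time and duration distributions:

#include <random>
#include <vector>

struct Event { double start, end; };

// Place each event at a random position in [0,1]; if anything overlaps
// (or sticks out of the interval), wipe the slate clean and retry.
std::vector<Event> scheduleEvents(int numEvents, std::mt19937& rng)
{
    std::uniform_real_distribution<double> drawStart(0.0, 1.0);      // stand-in distribution
    std::uniform_real_distribution<double> drawDuration(0.01, 0.02); // stand-in distribution

    for (;;)
    {
        std::vector<Event> events;
        bool collision = false;

        for (int i = 0; i < numEvents && !collision; ++i)
        {
            double start = drawStart(rng);
            double end   = start + drawDuration(rng);

            if (end > 1.0)
                collision = true;               // event falls outside the interval

            for (const Event& e : events)
            {
                if (start < e.end && e.start < end)
                {
                    collision = true;           // overlaps an already placed event
                    break;
                }
            }

            if (!collision)
                events.push_back({start, end});
        }

        if (!collision)
            return events;                      // every event placed without overlap
    }
}

Since the interval is much larger than the combined durations, the expected number of restarts stays small.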

Is event recording in a time-sensitive system possible?

Basic Question
Is there any way to do event recording and playback within a time-sensitive (framerate-independent) system?
Any help - including a simple "No sorry it is impossible" - would be greatly appreciated. I have spent almost 20 hours working on this over the past few weekends and am driving myself crazy.
Full Details
This is currently aimed at a game, but the libraries I'm writing are designed to be more general, and this concept applies to more than just my C++ code.
I have some code that looks functionally similar to this... (it is written in C++0x, but I'm taking some liberties to make it more compact)
void InputThread()
{
    InputAxisReturn AxisState[IA_SIZE];
    while (Continue)
    {
        Threading()->EventWait(InputEvent);
        Threading()->EventReset(InputEvent);
        pInput->GetChangedAxis(AxisState);
        //REF ALPHA
        if (AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = AxisState[IA_GAMEPAD_0_X].Value;
        }
    }
}
And I have a separate thread that looks like this...
//REF BETA
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();
    //GetElapsedTime() returns a float of time in seconds since its last call
    UpdateAll(LoopTimer.GetElapsedTime());
}
Now I'd like to record input events for playback for testing and some limited replay functionality.
I can easily record the events with precision timing by simply inserting the following code where I marked //REF ALPHA
//REF ALPHA
EventRecordings.push_back(EventRecording(TimeSinceRecordingBegan, AxisState));
The real issue is playing these back. My LoopTimer is extremely high precision, using the high-performance counter (QueryPerformanceCounter). This means that it is nearly impossible to hit the same time difference using code like the one below in place of //REF BETA.
// REF BETA
NextEvent = EventRecordings.pop_back();
Time TimeSincePlaybackBegan;
while (Continue)
{
    //Is there a message to process?
    StandardWindowsPeekMessageProcessing();
    //Did we reach the next event?
    if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
    {
        if (NextEvent.AxisState[IA_GAMEPAD_0_X].Changed)
        {
            X_Axis = NextEvent.AxisState[IA_GAMEPAD_0_X].Value;
        }
        NextEvent = EventRecordings.pop_back();
    }
    //GetElapsedTime() returns a float of time in seconds since its last call
    Time elapsed = LoopTimer.GetElapsedTime();
    UpdateAll(elapsed);
    TimeSincePlaybackBegan += elapsed;
}
The issue with this approach is that you will almost never hit the exact same time so you will have a few microseconds where the playback doesn't match the recording.
I also tried event snapping. It's kind of a confusing term, but basically, if TimeSincePlaybackBegan > NextEvent.TimeSinceRecordingBegan, then TimeSincePlaybackBegan is set to NextEvent.TimeSinceRecordingBegan and ElapsedTime is altered to suit.
It had some interesting side effects which you would expect (like some slowdown), but unfortunately it still resulted in the playback desynchronizing.
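In code, the snapping looked roughly like this (a reconstruction of the idea, not the exact code):

// Clamp this frame's elapsed time so the playback clock lands exactly
// on the recorded event time instead of overshooting it.
Time elapsed = LoopTimer.GetElapsedTime();
if (TimeSincePlaybackBegan + elapsed > NextEvent.TimeSinceRecordingBegan)
{
    elapsed = NextEvent.TimeSinceRecordingBegan - TimeSincePlaybackBegan;
}
TimeSincePlaybackBegan += elapsed;
UpdateAll(elapsed);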
For some more background - and possibly a reason why my time snapping approach didn't work - I'm using BulletPhysics somewhere down that UpdateAll call. Kind of like this...
void Update(float diff)
{
    static const float m_FixedTimeStep = 0.005f;
    static const uint32 MaxSteps = 200;
    //Updates the fps
    cWorldInternal::Update(diff);
    if (diff > MaxSteps * m_FixedTimeStep)
    {
        Warning("cBulletWorld::Update() diff > MaxTimestep. Determinism will be lost");
    }
    pBulletWorld->stepSimulation(diff, MaxSteps, m_FixedTimeStep);
}
But I also tried pBulletWorld->stepSimulation(diff, 0, 0), which according to http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Stepping_the_World should have done the trick, but still to no avail.
Answering my own question for anyone else who stumbles upon this.
Basically, if you want deterministic recording and playback, you need to lock the frame rate. If the system cannot handle the frame rate, you must introduce slowdown or risk desynchronization.
After two weeks of additional research I've decided it is just not possible otherwise, due to floating-point inadequacies and the fact that floating-point arithmetic is not necessarily associative.
The only solution for a deterministic engine that relies on floating-point numbers is to have a stable, fixed frame rate. Any change in the frame rate will, over the long term, result in the playback becoming desynchronized.
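For anyone wanting the shape of such a locked-rate loop, here is a rough sketch of the usual fixed-timestep accumulator pattern, reusing names from the code above (ApplyRecordedEventsForStep and Render are made-up placeholders):

// The simulation always advances in identical fixed steps, so recording and
// playback see exactly the same sequence of updates regardless of how fast
// frames are actually produced.
const float FixedStep = 0.005f;       // seconds per simulation step
float accumulator = 0.0f;
uint32 stepCount = 0;                 // key recorded events to this, not to wall time

while (Continue)
{
    StandardWindowsPeekMessageProcessing();

    accumulator += LoopTimer.GetElapsedTime();
    while (accumulator >= FixedStep)
    {
        ApplyRecordedEventsForStep(stepCount);  // inject recorded inputs by step index
        UpdateAll(FixedStep);                   // always the same dt
        accumulator -= FixedStep;
        ++stepCount;
    }

    Render();                                   // rendering rate may vary freely
}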

ASCII DOS Games - Rendering methods

I'm writing an old-school ASCII DOS-prompt game. Honestly, I'm trying to emulate ZZT to learn more about this brand of game design (even if it is antiquated).
I'm doing well: I got my full-screen text mode to work and I can create worlds and move around without problems, BUT I cannot find a decent timing method for my renders.
I know my rendering and pre-rendering code is fast, because if I don't add any delay()s or (clock()-renderBegin)/CLK_TCK checks from time.h, the renders are blazingly fast.
I don't want to use delay() because it is, to my knowledge, platform specific, and on top of that I can't run any code while it delays (like user input and processing). So I decided to do something like this:
do {
    if (kbhit()) {
        input = getch();
        processInput(input);
    }
    if (clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval) {
        renderTimer = clock();
        render();
        ballLogic();
    }
} while (input != 'p');
Which should, in theory, work just fine. The problem is that when I run this code (setting RenderInterval to 0.0333, i.e. 30 fps) I don't get ANYWHERE close to 30 fps; I get more like 18 at most.
I thought maybe I'd try setting RenderInterval to 0.0 to see if the performance picked up... it did not. I was (with a RenderInterval of 0.0) getting at most ~18-20 fps.
I thought maybe, since I'm continuously calling all these clock() and divide-this-by-that operations, I was slowing the CPU down something scary, but when I took the render and ballLogic calls out of the if statement's brackets and set RenderInterval to 0.0, I got, again, blazingly fast renders.
This doesn't make sense to me, since if I leave the if check in, shouldn't it run just as slow? I mean, it still has to do all the calculations.
BTW, I'm compiling with Borland's Turbo C++ V1.01.
The best gaming experience is usually achieved by synchronizing with the vertical retrace of the monitor. In addition to providing timing, this will also make the game run smoother on the screen, at least if you have a CRT monitor connected to the computer.
In 80x25 text mode, the vertical retrace (on VGA) occurs 70 times/second. I don't remember if the frequency was the same on EGA/CGA, but am pretty sure that it was 50 Hz on Hercules and MDA. By measuring the duration of, say, 20 frames, you should have a sufficiently good estimate of what frequency you are dealing with.
Let the main loop be something like:
while (playing) {
    do whatever needs to be done for this particular frame
    VSync();
}

... /* snip */

/* Wait for vertical retrace */
void VSync() {
    while (inp(0x3DA) & 0x08);      /* wait until any current retrace has ended */
    while (!(inp(0x3DA) & 0x08));   /* wait for the next retrace to begin */
}
clock()-renderTimer > RenderInterval * CLOCKS_PER_SEC
would compute a bit faster, possibly even faster if you pre-compute the RenderInterval * CLOCKS_PER_SEC part.
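For instance, a sketch of the same loop with the conversion hoisted out of the comparison (renderTicks is just an illustrative name):

/* Pre-compute the interval in clock ticks once, outside the loop */
clock_t renderTicks = (clock_t)(RenderInterval * CLOCKS_PER_SEC);

do {
    if (kbhit()) {
        input = getch();
        processInput(input);
    }
    if (clock() - renderTimer > renderTicks) {
        renderTimer = clock();
        render();
        ballLogic();
    }
} while (input != 'p');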
I figured out why it wasn't rendering right away. The timer that I created is fine; the problem is that clock_t is only accurate to .054547XXX seconds or so, and so I could only render at 18 fps. The way I would fix this is by using a more accurate clock... which is a whole other story.
What about this: you are subtracting y (= renderTimer) from x (= clock()), and both x and y are being divided by CLOCKS_PER_SEC:
clock()/CLOCKS_PER_SEC - renderTimer/CLOCKS_PER_SEC > RenderInterval
Wouldn't it be more efficient to write:
(clock() - renderTimer) > RenderInterval * CLOCKS_PER_SEC
The very first problem I saw with the division is that you're not going to get a real number from it, since it happens between two long ints. The second problem is that it is more efficient to multiply RenderInterval by CLOCKS_PER_SEC and in this way get rid of the division, simplifying the operation.
Adding the brackets gives it more legibility. And maybe by simplifying this formula you will more easily see what's going wrong.
As you've spotted with your most recent question, you're limited by CLOCKS_PER_SEC, which is only about 18.2 on DOS. You get one frame per discrete value of clock(), which is why you're limited to roughly 18 fps.
You could use the screen's vertical blanking interval for timing; it's traditional for games, as it avoids "tearing" (where half the screen shows one frame and half shows another).