Assert statement in Verilog

I'm completely new to Verilog, so bear with me.
I'm wondering if there is an assert statement in Verilog. In my testbench, I want to be able to assert that the outputs of modules are equal to certain values.
For example,
mymodule m(in, out);
assert(out == 1'b1);
Googling gave me a few links, but they were either too complex or didn't seem to be what I wanted.

There is an open source library for assertions called OVL. However, it's pretty heavy. One trick I nicked from there is creating a module to do assertions.
module assert(input clk, input test);
  always @(posedge clk)
  begin
    if (test !== 1)
    begin
      $display("ASSERTION FAILED in %m");
      $finish;
    end
  end
endmodule
Now, any time you want to check a signal, all you have to do is instantiate an assertion in your module, like this:
module my_cool_module(input clk, ...);
  ...
  assert a0(.clk(clk), .test(some_signal && some_other_signal));
  ...
endmodule
When the assertion fails, you'll get a message like this:
ASSERTION FAILED in my_cool_module.a0
The %m in the display statement will show the entire hierarchy to the offending assertion, which is handy when you have a lot of these in a larger project.
You may wonder why I check on the edge of the clock. This is subtle but important. If some_signal and some_other_signal in the expression above were assigned in different always blocks, the expression could be false for a brief period, depending on the order in which your Verilog simulator schedules the blocks, even though the logic is entirely valid. Sampling only at the clock edge avoids that kind of spurious failure.
The other thing to note above is the use of !==, which makes the assertion fail if the test value is X or Z. With the ordinary !=, a comparison against X or Z evaluates to X, the if branch is not taken, and the assertion could silently pass.

Putting the above together with a macro works for me:
`define assert(signal, value) \
  if (signal !== value) begin \
    $display("ASSERTION FAILED in %m: signal != value"); \
    $finish; \
  end
Then later in my test module:
initial begin // assertions
  #32 `assert(q, 16'hF0CB)
end
As an example of a failing test case:
ASSERTION FAILED in test_shift_register: q != 16'hF0CB

You can write something like this:
if(!(out==1'b1)) $finish;
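For instance, inside a testbench initial block (a minimal sketch; the #10 delay before the check is purely illustrative):
initial begin
  #10;
  if (!(out == 1'b1)) $finish;
end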

If your simulator supports SystemVerilog syntax, there is an assert keyword which does what you want.
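For example, a minimal sketch of a SystemVerilog immediate assertion sampled on the clock (the signal names are placeholders):
always @(posedge clk)
  assert (out == 1'b1) else $error("out was not 1 at time %0t", $time);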

Verilog itself doesn't support assertions. Some tools support PSL, which places assertions in comments, but this is non-standard. Consider using hierarchical references from a testbench instead; otherwise you have to place each assertion in its own process, which gets messy.
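For example, a testbench can check an internal design signal through its hierarchical path (a minimal sketch; dut.some_internal_signal and the settling delay are purely illustrative, assuming the testbench instantiates the design as dut):
initial begin
  #100; // wait for the design to settle
  if (dut.some_internal_signal !== 1'b1) begin
    $display("ASSERTION FAILED: dut.some_internal_signal");
    $finish;
  end
end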
The easiest way to mimic C-like assertions is probably a `define since this will make them global.
`define assert(condition) if(!(condition)) begin $finish(1); end
In order to check signals in a non-procedural context, such as your example, you will need a different macro that builds a condition signal and then triggers a test event for that signal.
`define assert_always(condition) generate if(1) begin wire test = condition; always @(test) `assert(condition) end endgenerate
The generate above will create a new scope for the variable test so multiple instances should work.
A better way in a procedural context might be to create a task in a separate file and then `include that in any module declaration.
task assert(input condition);
  if (!condition)
    $finish(2);
endtask
For non-procedural contexts you'll need to create a module containing a process and instantiate that module. This requires a unique instance name each time, unless you put it in a generate block.
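A minimal sketch of the generate-block variant (assert_unit stands in for a checker module like the one shown in the first answer; N and data_valid are illustrative):
genvar i;
generate
  for (i = 0; i < N; i = i + 1) begin : chk
    assert_unit u(.clk(clk), .test(data_valid[i]));
  end
endgenerate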

Related

How to test that a Bazel my_test rule fails when it should?

I have found the documentation for how to test for expected analysis-phase errors, but I'm drawing a blank no matter what I try to search for on how to test for expected execution-phase failures.
An example of what I'm looking for would be a test of this example line_length_test rule, where the test feeds in a file with over-length lines and expects the test-under-test to run and fail.
Or to put it another way; I want a test that would fail if I did something dumb like this:
def _bad_test_impl(ctx):
    # No-op, never fails regardless of what you feed it.
    executable = ctx.actions.declare_file(ctx.label.name + ".sh")
    ctx.actions.write(output=executable, content="")
    return [DefaultInfo(executable=executable)]

bad_test = rule(
    implementation=_bad_test_impl,
    test=True,
)
Edit:
So far, the best I've come up with is the very gross:
BUILD
# Failure
bad_test(
    name = "bad_test_fail_test",
    tags = ["manual"],
)

native.sh_test(
    name = "bad_test_failure_test",
    srcs = [":not.sh"],
    args = ["$(location :bad_test_fail_test)"],
    data = [":bad_test_fail_test"],
)
not.sh
! $*
This does not seem like a good idea, particularly for something I'd expect to be well supported.
EDIT:
I got annoyed and built my own. I still wish there were an official implementation.
I don't think that your not.sh is a particularly bad solution. But it depends on the situation. In general, the key to good failure testing is specificity, i.e. this test should fail for this specific reason. If you don't get this nailed down, you're going to have a lot of headaches with false positives.
I'll use a slightly more expressive example to try and illustrate why it's so difficult to create a "generic/well supported" failure testing framework.
Let's say that we are developing a compiler. To test the compiler's parser, we intentionally feed the compiler a malformed source file and expect it to fail with something like a "missed semicolon on line 55". But instead our compiler fails from a fairly nasty bug that results in a segfault. As we have just tested that the compiler fails, the test passes.
This kind of a false positive is really hard to deal with in a way that is easy to reason about, whilst also being generic.
What we really want in the above scenario, is to test that the compiler fails AND it prints "missed semicolon on line 55". At this point it becomes increasingly difficult to create a "well supported" interface for these kinds of tests.
In reality a failure test is an integration test, or in some cases an end to end test.
Here is a short excerpt of an integration test from the Bazel repository. This integration test calls Bazel with a range of different flags, expecting some combinations to succeed and others to fail:
function test_explicit_sandboxfs_not_found() {
  create_hello_package

  bazel build \
    --experimental_use_sandboxfs \
    --experimental_sandboxfs_path="/non-existent/sandboxfs" \
    //hello >"${TEST_log}" 2>&1 && fail "Build succeeded but should have failed"
  expect_log "Failed to get sandboxfs version.*/non-existent/sandboxfs"
}
And the corresponding build definition:
sh_test(
    name = "sandboxfs_test",
    size = "medium",
    srcs = ["sandboxfs_test.sh"],
    data = [":test-deps"],
    tags = ["no_windows"],
)

Is there a way to specify an NUnit test as "extra credit"?

I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing();
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?
I would use Warnings to achieve this. That way, your test will print a warning, but it will not be reported as a failure and will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
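For instance, a minimal sketch using NUnit's warning assertions (ExpectedNegativeResult is a placeholder for whatever the correct value would be):
[Test]
public void OptionalTest()
{
    // We do want to handle negative numbers, but this is aspirational for now,
    // so report a warning rather than a failure if it doesn't work yet.
    int result = DoThing(-1);
    Warn.Unless(result == ExpectedNegativeResult, "Negative inputs are not handled correctly yet");
}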
as soon as it does, I would like to take notice and perhaps make it a requirement.
This part's a slightly separate requirement! Depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute, which sends an email when the relevant test passes. See the docs, here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!

What is the opposite of EXPECT_CALL?

I have some expectations like EXPECT_CALL (...)
EXPECT_CALL(t1, foo()).Times(1);
I want to create the opposite.
I expect that a certain function won't be executed.
What is the method I should use?
Something like EXPECT_NOT_CALL(...)?
In GTest (GoogleMock) there is nothing called EXPECT_NOT_CALL; however, there are several options to get this behaviour:
1. Create a StrictMock. With a StrictMock, any unexpected call causes a failure (see the sketch after this list).
2. Use .Times(0):
EXPECT_CALL(t1, foo()).Times(0);
This option still uses the call-count mechanism, but it checks that the count is 0, so any call leads to a failure.
3. Use option 2 and create a macro:
#define EXPECT_NOT_CALL(a, b) EXPECT_CALL(a, b).Times(0);
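For option 1, a minimal sketch (MockT1 is a placeholder for your mock class):
::testing::StrictMock<MockT1> t1;
// No EXPECT_CALL is set for foo(), so any call to t1.foo() fails the test as an
// unexpected call; expectations for other methods can still be added as usual.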

Unit testing - log and then fail?

I am used to test-driving my code. Now that I am new to Go, I am trying to get it right as fast as possible. I am using the testing package in the standard library, which seems to be good enough. (I also like that it is not yet another external dependency; we are currently at 2 dependencies overall, compared to any Java or Ruby project...) Anyway - it looks like an assert in Go looks like this:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        t.Log("Oh noes - something is false")
        t.Fail()
    }
}
I find this verbose and would like to do it on one line instead:
Assert( something, "Oh noes - something is false" )
or something like that. I hope that I have missed something obvious here. What is the best/idiomatic way to do it in Go?
UPDATE: just to clarify. If I was to do something like this:
func AssertTrue(t *testing.T, value bool, message string) {
    if !value {
        t.Log(message)
        t.Fail()
    }
}
and then write my test like this
func TestSomething(t *testing.T) {
    something := false
    AssertTrue(t, something, "Oh noes - something is false")
}
then that would not be the Go way to do it?
There are external packages that can be integrated with the stock testing framework.
One of them, gocheck, which I wrote long ago, was intended to sort out exactly that kind of use case.
With it, the test case looks like this, for example:
func (s *Suite) TestFoo(c *gocheck.C) {
    // If this succeeds the world is doomed.
    c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
}
You'd run that as usual, with go test, and the failure in that check would be reported as:
----------------------------------------------------------------------
FAIL: foo_test.go:34: Suite.TestFoo
all_test.go:34:
// If this succeeds the world is doomed.
c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
... obtained string = "" +
... "line 1\n" +
... "line 2"
... expected string = "line 3"
Note how the comment right above the code was included in the reported failure.
There are also a number of other usual features, such as suite and test-specific set up and tear down routines, and so on. Please check out the web page for more details.
It's well maintained as I and other people use it in a number of active projects, so feel free to join on board, or follow up and check out the other similar projects that suit your taste more appropriately.
For examples of gocheck use, please have a look at packages such as mgo, goyaml, goamz, pipe, vclock, juju (massive code base), lpad, gozk, goetveld, tomb, etc. gocheck also manages to test itself; it was quite fun to bootstrap that.
But when you try to write tests the way Uncle Bob Martin recommends, with one assert per test and long, descriptive function names, a simple assert library like http://github.com/stretchr/testify/assert can make it much faster and easier.
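A minimal sketch of that style, assuming the testify package is available (the package name is illustrative):
package example_test

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
    something := false
    // assert.True marks the test as failed and prints the message when the value is false.
    assert.True(t, something, "Oh noes - something is false")
}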
I discourage writing tests in the way you seem to desire. It's not by chance that the whole stdlib uses the, as you call it, "verbose" way.
It is undeniably more lines, but there are several advantages to this approach.
If you read Why does Go not have assertions? and apply s/error handling/test failure reporting/g, you can get a picture of why the several "assert" packages for Go testing are not a good idea to use.
Once again, the proof is the huge code base of the stdlib.
The idiomatic way is the way you have above. Also, you don't have to log any message if you don't desire.
As stated in the Go FAQ:
Why does Go not have assertions?
Go doesn't provide assertions. They are undeniably convenient, but our
experience has been that programmers use them as a crutch to avoid
thinking about proper error handling and reporting. Proper error
handling means that servers continue operation after non-fatal errors
instead of crashing. Proper error reporting means that errors are
direct and to the point, saving the programmer from interpreting a
large crash trace. Precise errors are particularly important when the
programmer seeing the errors is not familiar with the code.
We understand that this is a point of contention. There are many
things in the Go language and libraries that differ from modern
practices, simply because we feel it's sometimes worth trying a
different approach.
UPDATE
Based on your update, that is not idiomatic Go. What you are doing is, in essence, designing a test extension framework to mirror what you get in the xUnit frameworks. While there is nothing fundamentally wrong with that from an engineering perspective, it does raise questions about the value versus the cost of maintaining such an extension library. Additionally, you are creating an in-house standard that will potentially ruffle feathers. The biggest thing about Go is that it is not C, Java, C++, or Python, and things should be done the way the language is designed.

Using Lua to define NPC behaviour in a C++ game engine

I'm working on a game engine in C++ using Lua for NPC behaviour. I ran into some problems during the design.
For everything that needs more than one frame for execution I wanted to use a linked list of processes (which are C++ classes). So this:
goto(point_a)
say("Oh dear, this lawn looks really scruffy!")
mowLawn()
would create a GotoProcess object, which would have a pointer to a SayProcess object, which would have a pointer to a MowLawnProcess object. These objects would be created instantly when the NPC is spawned, no further scripting needed.
The first of these objects will be updated each frame. When it's finished, it will be deleted and the next one will be used for updating.
I extended this model by a ParallelProcess which would contain multiple processes that are updated simultaneously.
I found some serious problems. Look at this example: I want a character to walk to point_a and then go berserk and just attack anybody who comes near. The script would look like that:
goto(point_a)
while true do
  character = getNearestCharacterId()
  attack(character)
end
That wouldn't work at all with my design. First of all, the character variable would be set at the beginning, when the character hasn't even started walking to point_a. Then, the script would continue adding AttackProcesses forever due to the while loop.
I could implement a WhileProcess for the loop and evaluate the script line by line. I doubt this would increase readability of the code though.
Is there another common approach I didn't think of to tackle this problem?
I think the approach you give loses a lot of the advantages of using a scripting language. It will break with conditionals as well as loops.
With coroutines all you really need to do is:
npc_behaviour = coroutine.create(
  function()
    goto(point_a)
    coroutine.yield()
    say("Oh dear, this lawn looks really scruffy!")
    coroutine.yield()
    mowLawn()
    coroutine.yield()
  end
)
goto, say and mowLawn return immediately but initiate the action in C++. Once C++ completes those actions it calls coroutine.resume(npc_behaviour)
To avoid all the yields you can hide them inside the goto etc. functions, or do what I do which is have a waitFor function like:
function waitFor(id)
  while activeEvents[id] ~= nil do
    coroutine.yield()
  end
end
activeEvents is just a Lua table which keeps track of all the things which are currently in progress - so a goto will add an ID to the table when it starts, and remove it when it finishes, and then every time an action finishes, all coroutines are activated to check if the action they're waiting for is finished.
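A minimal sketch of that bookkeeping on the Lua side (startGotoInEngine stands in for whatever C++ binding actually starts the movement and clears the entry when it finishes; all names are illustrative, and note that goto is a reserved word in Lua 5.2+, so a different name would be needed there):
activeEvents = {}
local nextEventId = 0

function goto(point)
  nextEventId = nextEventId + 1
  activeEvents[nextEventId] = true       -- mark the action as in progress
  startGotoInEngine(nextEventId, point)  -- C++ sets activeEvents[id] = nil when the move completes
  return nextEventId
end

-- inside the behaviour coroutine:
-- waitFor(goto(point_a))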
Have you looked at finite state machines? If I were you, I wouldn't use a linked list but a stack; I think the end result is the same.
stack:push(action:new(goto, character, point_a))
stack:push(action:new(say, character, "Oh dear, this lawn was stomped by a mammoth!"))
stack:push(action:new(mowLawn, character))
Executing the actions sequentially would give something like :
while stack.count > 0 do          -- do all actions in the stack
  action = stack:peek()           -- gets the action on top of the stack
  while action.over ~= true do    -- continue action until it is done
    action:execute()              -- execute is what the action actually does
  end
  stack:pop()                     -- action over, remove it and proceed to next one
end
The goto and other functions would look like this:
function goto(action, character, point)
  -- INSTANT MOVE YEAH
  character.x = point.x
  character.y = point.y
  action.over = true -- set the overlying action to be over
end

function attack(action, character, target)
  -- INSTANT DEATH WOOHOO
  target.hp = 0
  action.over = true -- attack is a punctual action
end

function berserk(action, character)
  attack(action, character, getNearestCharacterId()) -- call the underlying attack
  action.over = false -- but don't set the action as done!
end
So whenever you stack:push(action:new(berserk, character)) it will loop on attacking a different target every time.
I also made you a stack and action implementation in object-oriented Lua here. I haven't tried it; it may be bugged like hell. Good luck with your game!
I don't know the reasons behind your design, and there might be simpler or more idiomatic ways to do it.
However, would writing a custom "loop" process that takes a function as its argument do the trick?
goto(point_a)
your_loop(function ()
  character = getNearestCharacterId()
  attack(character)
end)
Since Lua has closures (see here in the manual), the function could be attached to your LoopProcess, and you would call this same function each frame. You would probably have to implement your LoopProcess so that it's never removed from the process list...
If you want your loop to be able to stop, it's a bit more complicated; you would have to pass another function containing the stop-test logic (and again, your LoopProcess would have to call this every frame, or something).
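A minimal sketch of such a process, assuming the engine calls update() on every live process once per frame (LoopProcess, over, and the constructor shape are all illustrative):
LoopProcess = {}
LoopProcess.__index = LoopProcess

function LoopProcess.new(body, shouldStop)
  return setmetatable({ body = body, shouldStop = shouldStop, over = false }, LoopProcess)
end

function LoopProcess:update()
  if self.shouldStop and self.shouldStop() then
    self.over = true   -- let the engine remove the process from its list
  else
    self.body()        -- run one iteration of the loop body this frame
  end
end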
Hoping I understood your problem ...