I am trying out several Gradle basics.
Here is how my Gradle build file "build.gradle" looked:
task hello
{
    doLast
    {
        println 'Hello World!'
    }
}
This causes the following error:
D:\DevAreas\learn-gradle>gradle -q hello
FAILURE: Build failed with an exception.
* Where:
Build file 'D:\DevAreas\learn-gradle\build.gradle' line: 2
* What went wrong:
Could not compile build file 'D:\DevAreas\learn-gradle\build.gradle'.
> startup failed:
build file 'D:\DevAreas\learn-gradle\build.gradle': 2: Ambiguous expression could be a parameterless closure expression, an isolated open code block, or it may continue a previous statement;
solution: Add an explicit parameter list, e.g. {it -> ...}, or force it to be treated as an open block by giving it a label, e.g. L:{...}, and also either remove the previous newline, or add an explicit semicolon ';' @ line 2, column 1.
{
^
1 error
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
If I make a minor modification to the build file like so (please note that I have moved the opening brace from the second line to the first line):
task hello {
    doLast
    {
        println 'Hello World!'
    }
}
I see the output
Hello World!
without issues.
Is brace placement such a big problem in Gradle? What was I doing so wrong by placing the brace on the second line?
As with other languages that use semicolon inference, the newline makes a difference in Groovy. The first snippet gets parsed as task hello; { ... }, which is ambiguous (the parser can't decide whether the second statement is a block or a closure) and hence invalid Groovy syntax. It's not what you want anyway; you want the closure to be associated with the hello task. To avoid such surprises, I recommend following the Java brace style.
I'm writing a brainfuck interpreter in NASM, where code is supplied as a command line argument to the program. I'm trying to test looping, but GDB doesn't like my input. For example, this executes error-free when run on its own:
$./interpret "+++++[->+<]"
It hangs indefinitely, but I think that's due to a bug in the looping logic in the interpreter (hence the need for GDB).
If I load interpret into GDB though and attempt to supply the same argument, I get complaints:
gef➤ start "+++++[->+<]"
/bin/bash: line 1: ]: No such file or directory
/bin/bash: line 1: ]: No such file or directory
This seems to be due to < being interpreted as redirection despite the quotes, since [] works fine in GDB.
I tried escaping the stdin redirection with \<, but that leads to the same error. I also tried <<, but that leads to a warning:
gef➤ start "+++++[->+<<]"
/bin/bash: line 1: warning: here-document at line 1 delimited by end-of-file (wanted `]')
And the code gets cut off:
$r15 : 0x00007fffffffe428 → 0x002d5b2b2b2b2b2b ("+++++[-"?)
Is there a way to have GDB take what I give literally to start, and not attempt to do any redirection/interpretation of the arguments?
GDB isn't doing any interpretation; bash is. Using single quotes instead of double quotes may fix that.
(I wasn't able to replicate the problem using GDB-10.0 and bash-5.1.4 with double quotes though.)
I am using REPL.it to run Python for my homework. When typing in and running this line of code:
# print "This will not run"
I get an unexpected EOF error:
Traceback (most recent call last):
File "python", line 1
# print "This will not run"
^
SyntaxError: unexpected EOF while parsing
This is an issue with REPL.it, not with Python. I am not sure what the internals of that interpreter are, but it appears that REPL.it will not allow a comment as the first line of code if there is no other code. To illustrate, try the following:
foo = 1
# print "This will not run"
The interpreter should spit out None instead of raising an error. It seems that it also works to have a comment on the first line and an empty line (or a line with code) as the second line, but running a file in this app that consists of only a single comment line does not seem to work.
If you have access to Python on your computer (which you do by default if you are on Mac OSX or Linux), then I would suggest trying your examples in a real Python interpreter. Otherwise, you might see some unexpected results, as I assume that repl.it is not a full-featured interpreter (as indicated by the syntax error).
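For comparison, a file whose only content is a comment is perfectly valid Python and runs cleanly outside repl.it; for example (the file name is arbitrary):

# only_comment.py -- this file contains nothing but this comment.
# Running `python only_comment.py` prints nothing and exits with status 0.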
It means Python is surprised that the code ended before a complete statement was parsed. In your example, you didn't write any code, just a comment, with no blank line at the end?
Try print "This will not run" if that's the ONLY line of code in your file.
The Python interpreter is looking for code to execute but finds none, as the line you are trying to run is commented out (by the # at the beginning).
Because it found no code to evaluate, it makes some noise.
Remove the # and it will work...
I am using IAR EWARM's cspybat to run some unit tests for my embedded code using Unity. I would like an easy way for my build server to determine if the unit tests passed or failed. Is there a way for CSPY to return a nonzero error code if my unit tests fail? I have tried changing the return value in main() with no change. Is there a function I can call to force an error to be returned?
My cspybat batch file looks like this:
"C:\Program Files (x86)\IAR Systems\Embedded Workbench 7.4\common\bin\cspybat" -f "C:\Work\Sandbox\ST\stmicroeval\_iar_ewarm_project\settings\Project.UnitTest.general.xcl" --backend -f "C:\Work\Sandbox\ST\stmicroeval\_iar_ewarm_project\settings\Project.UnitTest.driver.xcl"
Unfortunately, no.
I've solved this by replacing exit with a function that prints a specific pattern plus the exit code. I then wrapped the call to cspybat in a script that 1) strips that extra line from the output and 2) exits with the desired exit code.
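A minimal sketch of that kind of wrapper in Python (the TEST_EXIT_CODE marker string, the tool path, and the .xcl file names are assumptions, not anything cspybat itself provides):

import re
import subprocess
import sys

# Hypothetical command line; adjust the paths and .xcl files to your project.
cmd = [r"C:\Program Files (x86)\IAR Systems\Embedded Workbench 7.4\common\bin\cspybat",
       "-f", r"settings\Project.UnitTest.general.xcl",
       "--backend", "-f", r"settings\Project.UnitTest.driver.xcl"]

result = subprocess.run(cmd, capture_output=True, text=True)

exit_code = 1  # assume failure unless the marker line is found
for line in result.stdout.splitlines():
    match = re.search(r"TEST_EXIT_CODE=(\d+)", line)  # marker printed by the exit replacement
    if match:
        exit_code = int(match.group(1))
    else:
        print(line)  # pass everything else through unchanged

sys.exit(exit_code)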
It's late 2020 and they still don't offer a mechanism to do this.
We solved it by including a macro file with the contents:
execUserExit()
{
    __message "program exited with __exit_value = ", __exit_value:%d;
}
And having our own exit variable in the code:
extern "C" int __exit_value=0xff;
We set that prior to calling exit() (though you could just write your own version of exit()).
This makes the debugger always print SOMETHING, even if the program crashes on startup.
Then we parse the output with a Python wrapper:
import re

pattern = r"__exit_value =\s([-0-9a-fA-Fx]*)"
retvalue = int(re.findall(pattern, process.stdout)[0])
Hi, I need to write an lldb breakpoint command that evaluates a value and prints out a message.
In gdb I could do it like this:
if ($value==2)
printf "Value is 2\n"
end
But in lldb the 'if-statement' is invalid it seems:
failed with error: 'if' is not a valid command.
error: Unrecognized command 'if'.
Can anyone tell me how to write this comparison inside my breakpoint command? Thanks!
You can use the expression parser to achieve this effect in some cases, and you can use the lldb Python interpreter for whatever complex work you want to do in response to a breakpoint hit. Given the fairly deep level of Python support, we felt that if you don't know Python, your time would be better spent learning a little bit of it so you could really script lldb, rather than learning whatever little micro-language we would come up with.
Anyway, using the expression parser, you could for instance do:
expr if ($value == 2) { (int) printf("Value is 2\n"); }
And using the Python interpreter you can write a callback like:
import lldb

def myCallback(frame, breakpoint_location, dict):
    value = frame.FindValue("$value", lldb.eValueTypeConstResult)
    if value.unsigned == 2:
        print("Value is 2")
put that in a file called myModule.py, do:
(lldb) command script import myModule.py
and then assign the command to your breakpoint with:
(lldb) breakpoint command add -F myModule.myCallback <BREAKPOINT_NUMBER>
That python example was a little more complex than normal because you were looking up lldb's equivalent of gdb's "convenience variable". If you were looking up a local, you could use frame.FindVariable.
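For example, a variant of the callback that checks a local variable instead (the variable name my_local is hypothetical):

import lldb

def myLocalCallback(frame, breakpoint_location, dict):
    # FindVariable looks up a local variable or argument visible in this frame.
    value = frame.FindVariable("my_local")
    if value.unsigned == 2:
        print("my_local is 2")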
More details on this at:
http://lldb.llvm.org/python-reference.html
I've written a command line tool that I want to test (I'm not looking to run unit tests from command line). I want to map a specific set of input options to a specific output. I haven't been able to find any existing tools for this. The application is just a binary and could be written in any language but it accepts POSIX options and writes to standard output.
Something along the lines of:
For each known set of input options:
    Launch application with specified input.
    Pipe output to a file.
    Diff output to stored (desired) output.
    If diff is not empty, record error.
(Btw, is this what you call an integration test rather than a unit test?)
Edit: I know how I would go about writing my own tool for this, I don't need help with the code. What I want to learn is if this has already been done.
DejaGnu is a mature and somewhat standard framework for writing test suites for CLI programs.
Here is a sample test taken from this tutorial:
# send a string to the running program being tested:
send "echo Hello world!\n"
# inspect the output and determine whether the test passes or fails:
expect {
    -re "Hello world.*$prompt $" {
        pass "Echo test"
    }
    -re "$prompt $" {
        fail "Echo test"
    }
    timeout {
        fail "(timeout) Echo test"
    }
}
Using a well-established framework like this is probably going to be better in the long run than anything you can come up with yourself, unless your needs are very simple.
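If you would rather stay in Python, the same send/expect pattern can be sketched with the pexpect library (the tool name, the command sent, and the expected text are assumptions; this is not DejaGnu itself):

import pexpect  # third-party package: pip install pexpect

# Spawn a hypothetical interactive tool and drive it expect-style.
child = pexpect.spawn("./mytool", encoding="utf-8", timeout=5)
child.sendline("echo Hello world!")
index = child.expect(["Hello world", pexpect.TIMEOUT])
print("pass" if index == 0 else "fail (timeout)")
child.close()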
You are looking for BATS (Bash Automated Testing System):
https://github.com/bats-core/bats-core
From the docs:
example.bats contains
#!/usr/bin/env bats

@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}

@test "addition using dc" {
  result="$(echo 2 2+p | dc)"
  [ "$result" -eq 4 ]
}
$ bats example.bats
✓ addition using bc
✓ addition using dc
2 tests, 0 failures
Well, I think every language should have a way to execute an external process.
In C#, you could do something like:
var p = new Process();
p.StartInfo = new ProcessStartInfo(@"C:\file-to-execute.exe");
// ... You can set parameters here, etc.
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.Start();

// To read the standard output:
var output = p.StandardOutput.ReadToEnd();
I have never had to write to the standard input, but I believe it can be done by accessing p.StandardInput as well. The idea is to treat both streams as Stream objects, because that's what they are.
In Python there is the subprocess module. According to its documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
I had to do the same when writing unit tests for the code generation part of a compiler I wrote some months ago: Writing unit tests in my compiler (which generates IL)
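A minimal sketch of the loop described in the question, using subprocess (the tool name, option sets, and expected-output files are all hypothetical):

import subprocess

# Hypothetical mapping from input options to files holding the desired output.
cases = {
    ("--verbose", "input.txt"): "expected_verbose.txt",
    ("--quiet", "input.txt"): "expected_quiet.txt",
}

failures = 0
for options, expected_file in cases.items():
    result = subprocess.run(["./mytool", *options], capture_output=True, text=True)
    with open(expected_file) as f:
        expected = f.read()
    if result.stdout != expected:  # a non-empty diff means the test failed
        failures += 1
        print("FAIL:", " ".join(options))

raise SystemExit(1 if failures else 0)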
We wrote should, a single-file Python program to test any CLI tool. The default usage is to check that a line of the output contains some pattern. From the docs:
# A .should file launches any command it encounters.
echo "hello, world"
# Lines containing a `:` are test lines.
# The `test expression` is what is found at the right of the `:`.
# Here 'world' should be found on stdout, at least in one line.
:world
# What is at the left of the `:` are modifiers.
# One can specify the exact number of lines where the test expression has to appear.
# 'moon' should not be found on stdout.
0:moon
Should can check occurrence counts, look for regular expressions, use variables, filter tests, parse JSON data, and check exit codes.
Sure, it's been done literally thousands of times. But writing a tool to run simple shell scripts or batch files like what you propose is a trivial task, hardly worth trying to turn into a generic tool.