In my .ned file I have a simple module
simple Txc1
{
parameters:
int dummy_number = default(10);
}
When I run the simulation from the command prompt, I can specify the dummy_number value I want, e.g.,
opp_run -l tictoc omnetpp.ini --**.dummy_number=15
and dummy_number equals 15 in my simulation.
But when I want to specify a string instead of an integer:
simple Txc1
{
parameters:
string dummy_string = default("Some dummy string");
}
opp_run -l tictoc omnetpp.ini --**.dummy_string="Test"
I get the following error:
Cannot evaluate parameter 'dummy_string':(omnetpp::cModule)Tictoc1: unknown parameter 'Test' -- in module (Txc1) Tictoc1.tic (id=2), during network setup
Why is my string perceived as a parameter name when it is just a value, and how do I circumvent the problem?
Thanks
Your shell strips the double quotes before the argument ever reaches OMNeT++, so the parameter value arrives as the bare word Test, which OMNeT++ then tries to evaluate as an expression referring to another parameter (hence "unknown parameter 'Test'"). As long as your string does not contain a space, it is enough to add backslashes before the quotes, e.g.:
opp_run -l tictoc omnetpp.ini --**.dummy_string=\"TestWithoutSpace\"
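If the string does contain spaces, one option (a sketch; exact quoting depends on your shell) is to wrap the whole option in single quotes so the shell passes the inner double quotes through untouched:
opp_run -l tictoc omnetpp.ini '--**.dummy_string="Test with spaces"'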
I have a Nextflow script with a channel for paired file inputs. I am trying to extract a substring from the file inputs to use as part of the shell call. I am trying to use Groovy's regex matching to extract the substring, but since it is based on an input value, I am having trouble executing the matching. An alternative would be to perform the regex in bash as part of the process shell call, but I am interested in figuring out how to manipulate inputs within a process anyway, as I feel it would be useful for other things too. How can I run intermediate Groovy code on the process inputs prior to the shell call?
process alignment {
    input:
    val files

    output:
    stdout

    def matcher = "${files[1][0]}" =~ /.+\/bcl2fastq_out\/([^\/]+)\/.+/
    # this is the culprit, as if I hardcode the first string it works
    def project = matcher.findAll()[0][1]
    """
    echo ${project}
    """
}
workflow {
    files = Channel
        .fromFilePairs("${params.out_dir}/**{_R1,_R2}_00?.fastq.gz", checkIfExists: true, size: 2)
    alignment(files)
}
When I execute this, I get the error:
No such variable: files
An example input string looks like extractions/test/bcl2fastq_out/project1/example_L001_R1_001.fastq.gz, where I'm trying to extract the project1 substring.
As you've already discovered, you can declare variables in the script block, before the command string. For example:
process alignment {
    input:
    tuple val(sample), path(fastq_files)

    output:
    tuple val(sample), path(output_file)

    script:
    output_file = "${sample}.bam"
    """
    >&2 echo "Aligning ${sample}..."
    touch "${output_file}"
    """
}
Note that these are global (within the process scope) unless you define them using the def keyword. If you don't need these elsewhere in your process definition, like in your example, a local variable (using def) is usually preferable. If, however, you need to access these in your output declaration, for example, then they will need to be global.
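For example, a value that is only needed inside the script block can be declared locally with def (a minimal sketch; the process name and the echo command are illustrative, and fastq_files is assumed to arrive as a list of files, as it does with fromFilePairs):
process count_pairs {
    input:
    tuple val(sample), path(fastq_files)

    output:
    stdout

    script:
    def n_files = fastq_files.size()  // local to the script block, thanks to 'def'
    """
    echo "${sample}: ${n_files} file(s)"
    """
}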
Note that the fromFilePairs factory method emits a tuple, where the first element is a group key and the second element is a list of files. The problem with just using val to declare the inputs is that the files in the second element will not be localized to the working directory when your script is run. To fix this, you can just change your input definition to something like:
input:
    tuple val(sample), path(fastq_files)
The problem with this approach is that we're unable to extract the parent directory name from the localized filenames, so you will need to pass it in somehow. Usually, you can just get the parent name from the first file in the tuple, using:
params.input_dir = './path/to/files'
params.pattern = '**_R{1,2}_00?.fastq.gz'

process alignment {
    debug true

    input:
    tuple val(sample), val(project), path(fastq_files)

    """
    echo "${sample}: ${project}: ${fastq_files}"
    """
}

workflow {
    Channel
        .fromFilePairs( "${params.input_dir}/${params.pattern}" )
        .map { sample, reads ->
            def project = reads[0].parent.name
            tuple( sample, project, reads )
        }
        .set { reads }

    alignment( reads )
}
But since the glob pattern has an additional wildcard, i.e. _00?, you may not necessarily get the results you expect. For example:
$ mkdir -p path/to/files/project{1,2,3}
$ touch path/to/files/project1/sample1_R{1,2}_00{1,2,3,4}.fastq.gz
$ touch path/to/files/project2/sample2_R{1,2}_00{1,2,3,4}.fastq.gz
$ touch path/to/files/project3/sample3_R{1,2}_00{1,2,3,4}.fastq.gz
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [determined_roentgen] DSL2 - revision: f80ab33ac8
executor > local (12)
[a8/9235cc] process > alignment (12) [100%] 12 of 12 ✔
sample2: project2: sample2_R1_001.fastq.gz sample2_R1_004.fastq.gz
sample1: project1: sample1_R1_003.fastq.gz sample1_R2_001.fastq.gz
sample1: project1: sample1_R1_004.fastq.gz sample1_R2_003.fastq.gz
sample3: project3: sample3_R1_001.fastq.gz sample3_R2_001.fastq.gz
sample1: project1: sample1_R1_001.fastq.gz sample1_R2_004.fastq.gz
sample1: project1: sample1_R1_002.fastq.gz sample1_R2_002.fastq.gz
sample2: project2: sample2_R1_002.fastq.gz sample2_R2_002.fastq.gz
sample2: project2: sample2_R2_001.fastq.gz sample2_R2_004.fastq.gz
sample2: project2: sample2_R1_003.fastq.gz sample2_R2_003.fastq.gz
sample3: project3: sample3_R2_002.fastq.gz sample3_R2_004.fastq.gz
sample3: project3: sample3_R1_003.fastq.gz sample3_R1_004.fastq.gz
sample3: project3: sample3_R1_002.fastq.gz sample3_R2_003.fastq.gz
Fortunately, we can supply a custom file pair grouping strategy using a closure. This uses the readPrefix helper function:
workflow {
    Channel
        .fromFilePairs( "${params.input_dir}/${params.pattern}" ) { file ->
            prefix = Channel.readPrefix(file, params.pattern)
            suffix = file.simpleName.tokenize('_').last()
            "${file.parent.name}/${prefix}_${suffix}"
        }
        .map { key, reads ->
            def (project, sample) = key.tokenize('/')
            tuple( sample, project, reads )
        }
        .set { reads }

    alignment( reads )
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [loving_cantor] DSL2 - revision: 5a76ac712f
executor > local (12)
[f4/74edbc] process > alignment (12) [100%] 12 of 12 ✔
sample1_003: project1: sample1_R1_003.fastq.gz sample1_R2_003.fastq.gz
sample2_002: project2: sample2_R1_002.fastq.gz sample2_R2_002.fastq.gz
sample1_002: project1: sample1_R1_002.fastq.gz sample1_R2_002.fastq.gz
sample2_003: project2: sample2_R1_003.fastq.gz sample2_R2_003.fastq.gz
sample2_004: project2: sample2_R1_004.fastq.gz sample2_R2_004.fastq.gz
sample2_001: project2: sample2_R1_001.fastq.gz sample2_R2_001.fastq.gz
sample1_001: project1: sample1_R1_001.fastq.gz sample1_R2_001.fastq.gz
sample1_004: project1: sample1_R1_004.fastq.gz sample1_R2_004.fastq.gz
sample3_001: project3: sample3_R1_001.fastq.gz sample3_R2_001.fastq.gz
sample3_004: project3: sample3_R1_004.fastq.gz sample3_R2_004.fastq.gz
sample3_002: project3: sample3_R1_002.fastq.gz sample3_R2_002.fastq.gz
sample3_003: project3: sample3_R1_003.fastq.gz sample3_R2_003.fastq.gz
I figured it out: instead of jumping straight into the shell script with the triple quotes, you can start the process execution script with "script:" and then run Groovy code on the process inputs first.
process alignment {
    input:
    val files

    output:
    stdout

    script:
    def test = (files[1][0] =~ '.+/test/([^/]+)/.+').findAll()[0][1]
    """
    echo $test
    """
}
I want to use command line arguments with ROS 2.
In my node:
const std::string name = "test";
const rclcpp::ParameterValue test(10);
const rclcpp::ParameterValue testArg(rclcpp::Node::declare_parameter(name, test, ParamDesc, true));
cout << testArg.to_value_msg().integer_value << "\n";
Then when I launch it with the argument:
$ ros2 run dip_sensors TI6843AOP_node --ros-args -p test:=42
10
The node starts well, but it prints 10 when I passed it 42.
I can set the parameter this way later, but not at startup:
$ ros2 param set /NameNode test 42
Set parameter successful
Then:
$ ros2 param get /NameNode test
Integer value is: 42
Do you have any idea what my mistake is?
Thanks,
Dimi
I have the solution and it's simple: I misread the function description. To let command line overrides take effect, the last argument (ignore_override) must be false, not true:
testArg(rclcpp::Node::declare_parameter(name, test, ParamDesc, true));  --> bad
testArg(rclcpp::Node::declare_parameter(name, test, ParamDesc, false)); --> good
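For reference, a minimal sketch of the corrected call (ParamDesc stands for the rcl_interfaces::msg::ParameterDescriptor already used in the original code, and this is assumed to run inside the node, e.g. in its constructor):
// The fourth argument of declare_parameter() is ignore_override.
// When it is true, overrides such as "--ros-args -p test:=42" are
// ignored and the default value (10) is kept; pass false to accept them.
const std::string name = "test";
const rclcpp::ParameterValue default_value(10);
const rclcpp::ParameterValue value(declare_parameter(name, default_value, ParamDesc, /*ignore_override=*/false));
std::cout << value.to_value_msg().integer_value << "\n";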
I am currently using a template_file to provision user_data into an aws_launch_configuration, like so:
resource "aws_launch_configuration" "launch_config" {
...
user_data = "${data.template_file.init.rendered}"
}
data "template_file" "init" {
template = "${file("router-init.sh.tpl")}"
vars {
hub_ip_addresses = "${join(",", aws_instance.gridHub.*.private_ip)}"
}
}
I am feeding in a variable (i.e. hub_ip_addresses) into the router-init.sh.tpl file, and in this file I am making use of the argument like so:
#!/bin/sh
...
IFS=',' read -r -a array <<< "$hub_ip_addresses"
for element in "${array[#]}"
do
    #do stuff with $element
done
Basically, I am splitting the string based on a delimiter, and then looping through each ip address in the array.
This bash script works fine when I run it on my local machine -- however, when Terraform executes it, it throws an error:
* data.template_file.init: data.template_file.init: failed to render : parse error at 13:25: expected expression but found invalid sequence "#"
I'm supposing the '#' symbol is causing an issue. Is there a reason why this is so? Do I need to escape it with a '\' ?
EDIT: Not sure if it's related to this issue, but on the preceding line of the bash script, IFS=',' read -r -a array <<< "$hub_ip_addresses", the <<< seems to make everything that follows render as if it were inside a comment (i.e. greyed out, as if it were within an unclosed quotation mark).
You need to escape the $ characters in your template by doubling them up or Terraform will attempt to interpolate them as the input variables to the template.
The template docs cover this briefly although the example given is for inline templates rather than for all templates, including those that are loaded with the file() function.
So something like:
#!/bin/sh
...
IFS=',' read -r -a array <<< "${hub_ip_addresses}"
for element in "$${array[@]}"
do
    #do stuff with $$element
done
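Note two details: the variable you do want substituted keeps the braced ${hub_ip_addresses} form so Terraform fills it in at render time, and the bash array subscript should be [@] ([#] is not valid bash). As a sketch, assuming hub_ip_addresses renders to 10.0.0.1,10.0.0.2, the script that actually reaches the instance would then look roughly like:
#!/bin/sh
...
IFS=',' read -r -a array <<< "10.0.0.1,10.0.0.2"
for element in "${array[@]}"
do
    #do stuff with $element
done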
Is there any way to have a required positional argument in a Crystal program?
For example
./myprog ~/Music -r
Instead of
./myprog -d ~/Music -r
So my program won't run if there's no [directory] argument. Right now I am using option_parser, and it can only handle -flag arguments.
There is no way to create required arguments using option_parser, but you can parse the arguments and print an error or exit if an argument you expect was not passed:
require "option_parser"
directory = nil
parser = OptionParser.new
parser.on("-d DIR", "Directory [required]") do |d|
directory = d
end
parser.parse ARGV
if directory.nil?
# directory argument was not set
# print help and exit
puts parser
exit 1
else
# ...
end
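If you specifically want a positional argument, as in ./myprog ~/Music -r, one possible approach (a sketch; it relies on option_parser's unknown_args callback, and the -r flag and messages are illustrative) is:
require "option_parser"

directory = nil
recursive = false

parser = OptionParser.new
parser.on("-r", "Recurse into subdirectories") { recursive = true }
# Everything that is not a recognised flag ends up here;
# treat the first leftover argument as the required directory.
parser.unknown_args do |before_dash, after_dash|
  directory = before_dash.first?
end
parser.parse ARGV

if directory.nil?
  puts parser
  exit 1
end
puts "Scanning #{directory} (recursive: #{recursive})"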
I am trying to write a program in cpp using maps...
My goal is to allow * alone when it is passed as a command line argument,
and I have to reject characters coming along with *, i.e. *abc and abc*.
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    printf("%s\n", argv[1]);
    char str[256] = "";
    int length;
    strcpy(str, argv[1]);
    length = strlen(str);
    if ((str[0] == '*') && (length == 1))
        printf("success\n");
    else
        printf("failure\n");
    return 0;
}
When I run this program, I get the following output:
$ ./exec *
1
failure
argv[1] is 1 (printing 1 instead of *); because the program does not receive * as its input argument, the check results in failure.
Could someone help me solve this issue?
Could someone help me to solve this issue?
Items on the command line are processed by the shell before your program ever sees them. The character * has a special meaning in most shells: standing alone, it performs a glob operation that expands to a list of all the files in the current directory, so your program receives completely different arguments than the ones you intended.
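A quick way to see what actually arrives is to print every element of argv (a minimal standalone sketch):
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* Print each argument exactly as the shell delivered it. */
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}
Running it as ./a.out * in a non-empty directory prints one line per file name, which is why the original program saw 1 instead of *.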
To avoid this, you can do:
$ ./exec "*"
or
$ ./exec '*'
If your shell is bash, then you can also turn off the globbing expansion with:
$ set -o noglob
or
$ set -f
but I would suggest the former options (above), as they are minimal and don't otherwise affect your shell's default behaviour. If you do disable globbing, you can re-enable it later with set +f (or set +o noglob).