Launch ./veins_launchd with the SUMO option "--scale" - veins

I want to launch ./veins_launchd with the SUMO option "--scale".
This is my launch command:
$ ./veins_launchd -vv --command='sumo --scale 6'
The command itself runs correctly, but as soon as I start running the simulation it prints the following error message:
Seed is 0
Finding free port number...
Claiming lock on port
...found port 38677
Starting SUMO (sumo --scale 6 -c due.actuated.sumocfg) on port 38677, seed 0
Releasing lock on port
Cleaning up
Result: "<?xml version="1.0"?>
<status>
<exit-code>-1</exit-code>
<start>1651068679</start>
<end>1651068679</end>
<status>Could not start SUMO (sumo --scale 6 -c due.actuated.sumocfg): [Errno 2] No such file or directory: 'sumo --scale 6': 'sumo --scale 6'</status>
<stdout><![CDATA[]]></stdout>
<stderr><![CDATA[]]></stderr>
</status>
"
I have read the following question:
How to give sumo options with sumo-launchd.py?
My problem is almost the same as that one. Since that question was posted in 2017 and has not been solved, I decided to ask it again.

You can simply add the line
<scale value="6"/>
to your due.actuated.sumocfg.
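For reference, a minimal sketch of where the element could go (the surrounding structure is only illustrative; keep the options your due.actuated.sumocfg already has):
<!-- due.actuated.sumocfg: only the <scale> element is new -->
<configuration>
    <!-- ... your existing input, time and report options ... -->
    <processing>
        <scale value="6"/>
    </processing>
</configuration>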

Related

SonarQube C++ build-wrapper produces an empty build-wrapper-dump.json

I'm using SonarQube C/C++ on a Windows 10 machine for a VxWorks (ARM) cross-compile project.
After running:
build-wrapper-win-x86-64 --out-dir bw_output make MyProject.make
The build succeeded, but I get an empty build-wrapper-dump.json file.
Looking at the build-wrapper.log file I see:
process created with pid: 25256
image path name: <C:\WindRiver_6.9\gnu\4.3.3-vxworks-6.9\x86-win32\bin\ccarm.exe>
command line: <ccarm ...long line... MwInterfaceService.cpp >
working directory: <CommonLib\>
isWow64: 1
skipping process C:\WindRiver_6.9\gnu\4.3.3-vxworks-6.9\x86-win32\bin\ccarm.exe with pid: 25256
Is there a way to make build-wrapper fill the build-wrapper-dump.json file correctly?
Thanks,
Mr. G

Debugging with GDB on RISC-V (spike: unrecognized option --gdb-port)

After building the RISC-V tools and GCC (cloned from lowRISC: riscv-isa-sim, not riscv-tools), I'm stuck at the debugging-with-GDB phase.
In the second terminal, target remote in gdb times out.
In the first terminal, when I run spike --gdb-port 9824 pk tests/debug or spike --gdb-port 9824 pk hello.c, it yields:
spike: unrecognized option --gdb-port
usage: spike [host options] <target program> [target options]
Host Options:
-p <n> Simulate <n> processors
-m <n> Provide <n> MB of target memory
-d Interactive debug mode
-g Track histogram of PCs
-h Print this help message
--ic=<S>:<W>:<B> Instantiate a cache model with S sets,
--dc=<S>:<W>:<B> W ways, and B-byte blocks (with S and
--l2=<S>:<W>:<B> B both powers of 2).
--extension=<name> Specify RoCC Extension
--extlib=<name> Shared library to load
I don't know whether this has to do with configuring gdb separately, or whether it was built and configured when I ran ./build.sh for the RISC-V tools.
If not, could you please correct the --gdb-port option (I'm new to Linux)? I've tried --gdb-port=9824 and --gdb-port:9824 and the result is the same.
Thank you.
The message spike: unrecognized option --gdb-port says that it is spike, not gdb, that doesn't recognize the option. Spike comes from riscv-isa-sim, not from riscv-tools, and the lowRISC variant of Spike (https://github.com/lowRISC/riscv-isa-sim) is many commits behind master:
This branch is 3 commits ahead, 172 commits behind riscv:master.
Latest commit e220bc4 on May 19, 2016 (wsong83: Merge commit '0d084d5' into update)
One of the commits that was never ported added gdb support to spike in https://github.com/riscv/riscv-isa-sim (documented at https://github.com/riscv/riscv-isa-sim#debugging-with-gdb), but it was not pulled into https://github.com/lowRISC/riscv-isa-sim (and is not documented there). The gdb-related commits are from Oct 2016, Jun 2016 and May 2016, and the --gdb-port option itself was added in d1d8863086c57f04236418f21ef8a7fbfc184b0b (Mar 19, 2016): https://github.com/riscv/riscv-isa-sim/commit/d1d8863086c57f04236418f21ef8a7fbfc184b0b
+ fprintf(stderr, " --gdb-port=<port> Listen on <port> for gdb to connect\n");
+ parser.option(0, "gdb-port", 1, [&](const char* s){gdb_port = atoi(s);});
You can try merging the changes between the two ISA sims yourself, ask the lowRISC authors to merge them, or just use spike from the upstream riscv repository.
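If you do switch to the upstream spike, a rough sketch of the two-terminal workflow implied by the commit above (binary names here are placeholders for whatever you built; the port number is the one from the question):
# terminal 1: start upstream spike and let it listen for gdb on the given port
spike --gdb-port=9824 pk hello
# terminal 2: connect the RISC-V gdb to that port
riscv64-unknown-elf-gdb hello -ex 'target remote localhost:9824'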

vtkXOpenGLRenderWindow error in a terminal in VNC

I run a VNC server on my workstation and connect from another computer. Both the server and the client are running Debian Jessie:
$ uname -a
Linux debian-VAIO 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u2 (2016-06-25) x86_64 GNU/Linux
Xfce is installed for the vnc server and xstartup is:
$ cat ~/.vnc/xstartup
#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
export XKL_XMODMAP_DISABLE=1
exec startxfce4
In the ~/.bashrc, I have added the lines below:
export LIBGL_ALWAYS_INDIRECT=y
export LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libstdc++.so.6'
Currently, when I ssh to the server with ssh -X, $DISPLAY is localhost:10.0 and the VTK-related command launches an X window. However, if a VNC server is launched in the ssh terminal and I log into Xfce and open a terminal there, $DISPLAY is :1.0 and VTK fails to launch an X window, with the following message:
Error: In /home/orobix/Desktop/vmtk-build/VTK/Rendering/OpenGL/vtkXOpenGLRenderWindow.cxx, line 394
vtkXOpenGLRenderWindow (0x2c30f10): Could not find a decent visual
Error: In /home/orobix/Desktop/vmtk-build/VTK/Rendering/OpenGL/vtkXOpenGLRenderWindow.cxx, line 394
vtkXOpenGLRenderWindow (0x2c30f10): Could not find a decent visual
Error: In /home/orobix/Desktop/vmtk-build/VTK/Rendering/OpenGL/vtkXOpenGLRenderWindow.cxx, line 394
vtkXOpenGLRenderWindow (0x2c30f10): Could not find a decent visual
Error: In /home/orobix/Desktop/vmtk-build/VTK/Rendering/OpenGL/vtkXOpenGLRenderWindow.cxx, line 613
vtkXOpenGLRenderWindow (0x2c30f10): GLX not found. Aborting.
I think it might be related to a missing X server configuration in the VNC server's xstartup, but I don't know how to fix it. Could anyone help me debug it? I will provide further information if needed. Thanks!
20160823 Update
I took the suggestion of VirtualGL + TurboVNC and installed the two components. A simple configuration using vglserver_config was done according to part 6.1 of http://www.virtualgl.org/vgldoc/2_1_1/#hd009001. Then I made TurboVNC's vncserver run Xfce with this xstartup.turbovnc:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
OS=`uname -s`
exec startxfce4
On the client, I use TurboVNC's vncviewer to connect to the server. The desktop looks quite different from the one in a default vncserver. Then I tried
/opt/VirtualGL/bin/vglrun vmtkimageviewer -ifile image_volume_voi.vti
where vmtkimageviewer should open a window, but instead it gives this error:
Executing vmtkimageviewer ...
X Error of failed request: GLXBadContext
Major opcode of failed request: 156 (GLX)
Minor opcode of failed request: 6 (X_GLXIsDirect)
Serial number of failed request: 17
Current serial number in output stream: 16
And unfortunately, with ssh -X I can no longer launch the X window as I did before, although $DISPLAY is still localhost:10.0. The output is:
Executing vmtkimageviewer ...
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 156 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 37
Current serial number in output stream: 38
But I think I am getting closer, as it looks like VirtualGL is at least partly working(?). What should I do next?
Plain and simple: the X server variant being used for the VNC session (either Xvnc or Xvfb) simply doesn't support OpenGL/GLX, and thus programs that need OpenGL will not work in that configuration.
A fallback to Mesa swrast or llvmpipe is possible though: http://www.mesa3d.org/llvmpipe.html
Finally, I got everything working. VirtualGL alone is enough for my purpose. Just install and configure it as described in sections 6.2 & 6.3 of its user's guide. Use the default vncserver and vncviewer as usual; TurboVNC is not needed. Use vglrun whenever you need OpenGL support. One modification is that I had to remove export LIBGL_ALWAYS_INDIRECT=y from my .bashrc.
It runs smoothly in my case. @datenwolf: Thank you again!
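For reference, a condensed sketch of that final setup (the vglrun path and the vmtkimageviewer command are taken from the question; the rest just restates the steps above):
# 1. remove "export LIBGL_ALWAYS_INDIRECT=y" from ~/.bashrc
# 2. configure the server once with vglserver_config, following sections 6.2 & 6.3 of the VirtualGL user's guide
# 3. start the stock VNC server and connect with the usual vncviewer
vncserver :1
# 4. inside the VNC session, run OpenGL programs through vglrun
/opt/VirtualGL/bin/vglrun vmtkimageviewer -ifile image_volume_voi.vti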

abrt - use event to copy/move coredump to custom location

I cannot seem to find a way to configure my abrt event to copy the coredump to a custom location. The reason I want to do this is to prevent abrt from pruning my coredumps when the crash directory exceeds MaxCrashReportsSize. Given that I have no control over how abrt is configured, I would like to export the coredump to a support directory as soon as it is created.
EVENT=post-create pkg_name=raptorio analyzer=CCpp
test -f coredump && { mkdir -p /opt/raptorio/cores; cp -f coredump /opt/raptorio/cores/$(basename `cat executable`).core; }
This event should save one coredump for each C/C++ binary from my raptorio RPM package. When my program crashes, abrt prints the following errors in the syslog:
Aug 30 08:28:41 abrtd: mkdir: cannot create directory `/opt/raptorio/cores': Permission denied
Aug 30 08:28:41 abrtd: cp: cannot create regular file `/opt/raptorio/cores/raptord.core': No such file or directory
Aug 30 08:28:41 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2016-08-30-08:28:10-31213' exited with 1
I see that the abrt event runs as root:root, but it is jailed somehow, possibly due to SELinux? I am using abrt 2.0.8 on CentOS 6.
/opt is not the right place to keep transient files. Cores should go in /var/raptorio/cores, perhaps; see the Filesystem Hierarchy Standard.
Assuming your program runs as user 'nobody', make sure 'nobody' has write permission on that directory, and you should be all set.
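A minimal sketch of that suggestion, assuming the directory can be created ahead of time and reusing the event from the question with only the target path changed:
# one-time setup ('nobody' is the user assumed in the answer above)
mkdir -p /var/raptorio/cores
chown nobody:nobody /var/raptorio/cores
# abrt event, as in the question, but copying into /var instead of /opt
EVENT=post-create pkg_name=raptorio analyzer=CCpp
    test -f coredump && cp -f coredump /var/raptorio/cores/$(basename `cat executable`).core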

Why can't an environment variable be seen by an executable if it is run on two or more nodes?

I am writing a program (I'll call it the "launcher") in C++ using MPI to "spawn" a second executable (the "slave"). Depending on how many nodes the cluster has available for the launcher, it launches a slave on each node, and the slave communicates back with the launcher, also through MPI. When a slave is done with its math, it tells the launcher that the node is now available, and the launcher spawns another slave on the free node. The point is to run 1000 independent calculations, which depend on a second executable, on a heterogeneous group of machines.
This works on my own computer, where I create a "fake" machinefile (or hostfile) giving two nodes to the program: localhost and localhost. The launcher spawns two slaves, and when one of them finishes another slave is launched. This tells me that the spawning process works correctly.
When I move it to the cluster at my lab (we use Torque/Maui to manage it), it also works if I ask for 1 (one) node. If I ask for more, I get a missing library error (libimf.so, to be precise; a library from the Intel compilers). The lib is there and the node can see it, since the program runs if I ask for just one node.
My PBS that works looks like this:
#!/bin/bash
#PBS -q small
#PBS -l nodes=1:ppn=8:xeon
#PBS -l walltime=1:00:00
#PBS -N MyJob
#PBS -V
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/mpich2.shared.exec/lib/:/opt/intel/composerxe-2011.3.174/compiler/lib/intel64/:/usr/local/boost/lib/
log_file="output_pbs.txt"
cd $PBS_O_WORKDIR
echo "Beginning PBS script." > $log_file
echo "Executing on hosts ($PBS_NODEFILE): " >> $log_file
cat $PBS_NODEFILE >> $log_file
echo "Running your stuff now!" >> $log_file
# mpiexec is needed in order to let "launcher" call MPI_Comm_spawn.
/usr/local/mpich2.shared.exec/bin/mpiexec -hostfile $PBS_NODEFILE -n 1 /home/user/launcher --hostfile $PBS_NODEFILE -r 1 >> $log_file 2>&1
echo "Fim do pbs." >> $log_file
When I try two or more nodes, the launcher doesn't Spawn any executables.
I get an output like this:
Beginning PBS script.
Executing on hosts (/var/spool/torque/aux//2742.cluster):
node3
node3
node3
node3
node3
node3
node3
node3
node2
node2
node2
node2
node2
node2
node2
node2
Running your stuff now!
(Bla bla bla from launcher initialization)
Spawning!
/usr/local/mpich2.shared.exec/bin/hydra_pmi_proxy: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory
I found one other person with a problem like mine on a mailing list, but no solution (http://lists.mcs.anl.gov/pipermail/mpich-discuss/2011-July/010442.html). The only answer suggested checking whether the node can see the lib (i.e. whether the directory where the lib is stored is mounted on the node), so I tried running
ssh node2 ls /opt/intel/composerxe-2011.3.174/compiler/lib/intel64/libimf.so >> $log_file
inside my PBS script, and the lib does exist in a folder that the node can see.
It seems to me that Torque/Maui is not exporting the environment variables to all nodes (even though I don't know why it wouldn't), so when I try to use MPI_Comm_spawn to run another executable on another node, it can't find the lib.
Does that make any sense? If so, could you suggest a solution?
Can anyone offer any other ideas?
Thanks in advance,
Marcelo
EDIT:
Following the suggestion in one of the answers, I installed Open MPI to test the "-x VARNAME" option of mpiexec. In the PBS script I changed the execution line to the following:
/usr/local/openmpi144/bin/mpiexec -x LD_LIBRARY_PATH -hostfile $PBS_NODEFILE -n 1 /var/dipro/melomcr/GSAFold_2/gsafold --hostfile $PBS_NODEFILE -r 1 >> $log_file 2>&1
but got the following error messages:
[node5:02982] [[3837,1],0] ORTE_ERROR_LOG: A message is attempting to be sent to a process whose contact information is unknown in file rml_oob_send.c at line 105
[node5:02982] [[3837,1],0] could not get route to [[INVALID],INVALID]
[node5:02982] [[3837,1],0] ORTE_ERROR_LOG: A message is attempting to be sent to a process whose contact information is unknown in file base/plm_base_proxy.c at line 86
From what I could gather on the internet, this error usually comes from executing mpiexec more than once, as in /path/to/mpiexec mpiexec -n 2 my_program, which is not my case.
I should add that the spawned "slave" program communicates with the "launcher" program through a port: the launcher opens a port with MPI_Open_port and waits with MPI_Comm_accept for the slave to connect via MPI_Comm_connect.
Like I said above, all of this works (with MPICH2) when I ask for just one node. With Open MPI I get the above error even when I ask for only one node.
You're correct: the remoting calls far below the clustering software do not transfer environment variables.
You could use the -x option of mpiexec to pass environment variables to the other nodes.
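For example (the Open MPI form is the one already tried in the question's edit; the MPICH/Hydra form is my assumption of the equivalent switch, so check it against mpiexec --help on your installation):
# Open MPI: forward LD_LIBRARY_PATH to the remote daemons and spawned processes
mpiexec -x LD_LIBRARY_PATH -hostfile $PBS_NODEFILE -n 1 ./launcher ...
# MPICH (Hydra), assumed equivalent: pass a comma-separated list of variable names
mpiexec -genvlist LD_LIBRARY_PATH -hostfile $PBS_NODEFILE -n 1 ./launcher ...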