SBCL statistical profiler warns of undefined function

SBCL 1.3.9 produces the following error when I attempt to run the statistical profiler. Is start-profiling not exported?
* (in-package :cl-user)
* (require :sb-sprof)
* (sb-sprof:with-profiling (:report :flat) (bnb::solve))
; in: SB-SPROF:WITH-PROFILING (:REPORT :FLAT)
; (SB-SPROF:START-PROFILING :MAX-DEPTH 4611686018427387903 :THREADS
; (LIST SB-THREAD:*CURRENT-THREAD*))
;
; caught STYLE-WARNING:
; undefined function: SB-SPROF:START-PROFILING
;
; compilation unit finished
; Undefined function:
; SB-SPROF:START-PROFILING
; caught 1 STYLE-WARNING condition
debugger invoked on a UNDEFINED-FUNCTION in thread
#<THREAD "main thread" RUNNING {100292C913}>:
The function SB-SPROF:START-PROFILING is undefined.
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT] Exit debugger, returning to top level.

This seems to be a problem with your distribution; please update to the latest SBCL, which at the time of writing is:
The most recent version is SBCL 1.3.18, released May 30, 2017
But if you take a look at the sources on GitHub:
git log -S start-profiling --source --all
commit 63f714af62d0ccdb9d4a793ab0245b036c3d8531 refs/tags/sbcl_1_0
Author: Juho Snellman <jsnell@iki.fi>
Date: Fri Nov 17 02:15:47 2006 +0000
0.9.18.58:
Further SB-SPROF improvements.
* Allocation profiling on gencgc. When the profiler is running in
allocation profiling mode, the gc will signal profiler ticks
when new allocation regions are opened.
* Add :LOOP keyword argument to WITH-PROFILING, to allow specifying
whether the body should be evaluated repeatedly until the maximum
sample count is reached.
* Improve merging of code-components with multiple debug-funs,
better handling of multiple functions with the same name
* More documentation
* Also update the stepper documentation
commit 554397512eea9d6e30067c5edc2def42006a5327 refs/tags/sbcl_0_8_12
Author: Christophe Rhodes <csr21@cam.ac.uk>
Date: Mon Jun 21 11:33:35 2004 +0000
0.8.11.20:
Add SB-SPROF contrib
That function was added a long time ago, so please try the latest code and follow the manual.
Also, if you inspect the code, it carries this feature conditional:
#-win32
So if you are on Windows this is not expected to work (the :win32 feature is present on all Windows builds of SBCL, not only 32-bit ones).

Error calling the kernel: error code is: -6 on openCL kernel

My question is a little different from those usually asked on this platform, but I think it is related to programming.
I am using Vitis and Alveo accelerators. There is basically a host program (written in C++) and an OpenCL kernel file, which is also a kind of C code.
The host asks the kernel to do something by passing the input data through arguments, and the results are then sent back to the host.
There are 3 ways to run the kernel (software emulation, hardware emulation and hardware). Generally, we start with software emulation and then move to hardware. In my experience, code that builds and runs in software emulation also builds and runs on hardware.
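For reference, a minimal host-side sketch of this flow with the plain OpenCL C++ bindings might look like the following (hypothetical; the kernel name "c2q" and the .xclbin name are taken from the log below, everything else is assumed and error handling is simplified):
// Hypothetical minimal host sketch using the standard cl2.hpp bindings.
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/cl2.hpp>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_ACCELERATOR, &devices);
    cl::Context context(devices[0]);

    // Load the FPGA binary produced by v++.
    std::ifstream bin("vector_addition.xclbin", std::ios::binary);
    std::vector<unsigned char> buf((std::istreambuf_iterator<char>(bin)),
                                   std::istreambuf_iterator<char>());
    cl::Program::Binaries bins{buf};

    cl_int err = CL_SUCCESS;
    cl::Program program(context, {devices[0]}, bins, nullptr, &err);

    // This is the step that fails in the log below: the kernel name passed
    // here must match a compute unit that exists inside the loaded .xclbin.
    cl::Kernel krnl_c2q(program, "c2q", &err);
    if (err != CL_SUCCESS) {
        std::cerr << "Error creating kernel, error code is: " << err << "\n";
        return 1;
    }
    // Arguments would then be set with krnl_c2q.setArg(...) and the kernel
    // launched through a cl::CommandQueue before reading results back.
    return 0;
}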
But I am getting a strange error as below:
INFO: Reading /home/m992c693/Workspace C2Q TC _paper/qht_3d_system/Hardware/vector_addition.xclbin
Loading: '/home/m992c693/Workspace C20 TC_paper/qht_3d_system/Hardware/vector_addition.xclbin
Trying to program device[6]: xilinx_u250_gen3x16_xdma_shell_4 |
Device[6]: program successful
-./src/host.cpp:585 Error calling krnt_c2q =
XRT build version: 2.13.466
Build hash: 5505e402c2calffe45eb6d3a9399b23a0dc8776
Build date: 2022-04-14 17:45:07
Git branch: 2022.1
PID: 243276
UID: 100586768
[Tue Sep 27 17:35:28 2022 GMT]
HOST: alveo
EXE: /home/m992c693/Workspace C20 TC_paper/qht_3d/Hardware/qht_3d
[XRT] ERROR: No such compute unit 'c2q:c2q_1': Invalid argument
Kernel (program, "c2q", &err), error code is: -6
It would be great if someone could explain why I am getting this error and what kind of debugging procedure I should follow.

Error: VECTORSZ is too small

I am new to working with Promela and in particular SPIN. I have a model which I am trying to verify, and I can't understand SPIN's output well enough to resolve the problem.
Here is what I did:
spin -a untitled.pml
gcc -o pan pan.c
./pan
The output was as follows:
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
pan: wrote untitled.pml.trail
(Spin Version 6.4.5 -- 1 January 2016)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim - (none specified)
assertion violations +
acceptance cycles - (not selected)
invalid end states +
State-vector 8172 byte, depth reached 0, errors: 1
0 states, stored
0 states, matched
0 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
I then ran SPIN again to try to determine the cause of the problem by examining the trail file. I used this command:
spin -t -v -p untitled.pml
This was the result:
using statement merging
spin: trail ends after -4 steps
#processes: 1
( global variable dump omitted )
-4: proc 0 (:init::1) untitled.pml:173 (state 1)
1 process created
According to this output (as I understand it), the verification is failing during the "init" procedure. The relevant code from within untitled.pml is this:
init {
int count = 0;
int ordinal = N;
do // This is line 173
:: (count < 2 * N + 1) ->
At this point I have no idea what is causing the problem, since as far as I can tell the "do" statement should execute just fine.
Can anyone please help me understand SPIN's output so I can remove this error during verification? For reference, the model does produce the correct output.
You can simply ignore the trail file in this case; it is not relevant here.
The error message
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
tells you that the value of the VECTORSZ directive is too small to verify your model.
By default, VECTORSZ is 1024 (the maximum size of the state vector, in bytes).
To fix this issue, try compiling your verifier with a larger VECTORSZ:
spin -a untitled.pml
gcc -DVECTORSZ=2048 -o run pan.c
./run
If 2048 doesn't work either, try progressively larger values; the "State-vector 8172 byte" line in your output suggests you may need a value above 8172.

elki-cli versus elki gui, I don't get equal results

Through the terminal on Ubuntu:
db@morris:~/lisbet/elki-master/elki/target$ elki-cli -algorithm outlier.lof.LOF -dbc.parser ArffParser -dbc.in /home/db/lisbet/AllData/literature/WBC/WBC_withoutdupl_norm_v10_no_ids.arff -lof.k 8 -evaluator outlier.OutlierROCCurve -rocauc.positive yes
giving
# ROCAUC: 0.6230046948356808
and in ELKI's GUI:
Running: -verbose -dbc.in /home/db/lisbet/AllData/literature/WBC/WBC_withoutdupl_norm_v10_no_ids.arff -dbc.parser ArffParser -algorithm outlier.lof.LOF -lof.k 8 -evaluator outlier.OutlierROCCurve -rocauc.positive yes
de.lmu.ifi.dbs.elki.datasource.FileBasedDatabaseConnection.parse: 18 ms
de.lmu.ifi.dbs.elki.datasource.FileBasedDatabaseConnection.filter: 0 ms
LOF #1/3: Materializing LOF neighborhoods.
de.lmu.ifi.dbs.elki.index.preprocessed.knn.MaterializeKNNPreprocessor.k: 9
Materializing k nearest neighbors (k=9): 223 [100%]
de.lmu.ifi.dbs.elki.index.preprocessed.knn.MaterializeKNNPreprocessor.precomputation-time: 10 ms
LOF #2/3: Computing LRDs.
LOF #3/3: Computing LOFs.
LOF: complete.
de.lmu.ifi.dbs.elki.algorithm.outlier.lof.LOF.runtime: 39 ms
ROCAUC: 0.6220657276995305
I don't understand why the two ROCAUC values aren't the same.
My goal in testing this is to be confident that what I am doing is right, but that is hard when I don't get matching results. Once I see that my settings are right, I will move on to my own experiments, which I can then trust.
Pass cli as the first command-line parameter to launch the CLI, or minigui to launch the MiniGUI. The following are equivalent:
java -jar elki/target/elki-0.6.5-SNAPSHOT.jar cli
java -jar elki/target/elki-0.6.5-SNAPSHOT.jar KDDCLIApplication
java -jar elki/target/elki-0.6.5-SNAPSHOT.jar de.lmu.ifi.dbs.elki.application.KDDCLIApplication
This will work for any class extending the class AbstractApplication.
You can also do:
java -cp elki/target/elki-0.6.5-SNAPSHOT.jar de.lmu.ifi.dbs.elki.application.KDDCLIApplication
(This will load one class less, but it is usually not worth the effort.)
This will work for any class that has a standard public static void main(String[]) method, as this is the standard Java invocation.
But notice that -h currently still prints 0.6.0 (January 2014); that value was not updated for the 0.6.5 interim versions and will be bumped for 0.7.0. The reported version number is therefore not reliable.
As for the differences you observed: try varying k by 1. If I recall correctly, we changed the meaning of the k parameter to be more consistent across different algorithms. (They are not consistent in the literature anyway.)
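For example (hypothetical, reusing the command from the question), compare the CLI result with the neighborhood size shifted by one:
elki-cli -algorithm outlier.lof.LOF -dbc.parser ArffParser -dbc.in /home/db/lisbet/AllData/literature/WBC/WBC_withoutdupl_norm_v10_no_ids.arff -lof.k 7 -evaluator outlier.OutlierROCCurve -rocauc.positive yes
elki-cli -algorithm outlier.lof.LOF -dbc.parser ArffParser -dbc.in /home/db/lisbet/AllData/literature/WBC/WBC_withoutdupl_norm_v10_no_ids.arff -lof.k 9 -evaluator outlier.OutlierROCCurve -rocauc.positive yes
If one of these matches the GUI's 0.6220657276995305, the two builds simply interpret the k parameter with an off-by-one difference (note that the GUI log above materializes k=9 neighbors for -lof.k 8).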

How to run record instruction-history and function-call-history in GDB?

(EDIT: per the first answer below, the current "trick" seems to be using an Atom processor. But I hope some gdb guru can answer whether this is a fundamental limitation, or whether adding support for other processors is on the roadmap?)
Reverse execution seems to be working in my environment: I can reverse-continue, see a plausible record log, and move around within it:
(gdb) start
...Temporary breakpoint 5 at 0x8048460: file bang.cpp, line 13.
Starting program: /home/thomasg/temp/./bang
Temporary breakpoint 5, main () at bang.cpp:13
13 f(1000);
(gdb) record
(gdb) continue
Continuing.
Breakpoint 3, f (d=900) at bang.cpp:5
5 if(d) {
(gdb) info record
Active record target: record-full
Record mode:
Lowest recorded instruction number is 1.
Highest recorded instruction number is 1005.
Log contains 1005 instructions.
Max logged instructions is 200000.
(gdb) reverse-continue
Continuing.
Breakpoint 3, f (d=901) at bang.cpp:5
5 if(d) {
(gdb) record goto end
Go forward to insn number 1005
#0 f (d=900) at bang.cpp:5
5 if(d) {
However the instruction and function histories aren't available:
(gdb) record instruction-history
You can't do that when your target is `record-full'
(gdb) record function-call-history
You can't do that when your target is `record-full'
And the only target type available is full; the other documented type, "btrace", fails with "Target does not support branch tracing."
So quite possibly it just isn't supported for this target, but as it's a mainstream modern one (gdb 7.6.1-ubuntu, on amd64 Linux Mint "Petra" running an "Intel(R) Core(TM) i5-3570") I'm hoping that I've overlooked a crucial step or config?
It seems that there is no solution other than using a CPU that supports it.
More precisely, your kernel has to support Intel Processor Tracing (Intel PT). This can be checked in Linux with:
grep intel_pt /proc/cpuinfo
See also: https://unix.stackexchange.com/questions/43539/what-do-the-flags-in-proc-cpuinfo-mean
The commands only work in record btrace mode.
In the GDB source commit beab5d9, it is nat/linux-btrace.c:kernel_supports_pt that checks if we can enter btrace. The following checks are carried out:
check if /sys/bus/event_source/devices/intel_pt/type exists and read the type
do a syscall(SYS_perf_event_open, &attr, child, -1, -1, 0) with the type that was read, and see if it returns >= 0 (TODO: why not use the C wrapper?); a standalone sketch of both checks follows below
The first check fails for me: the file does not exist.
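For reference, a minimal standalone sketch of the same two checks might look like this (this is not GDB's code; probing the current process instead of a traced child, and setting exclude_kernel, are my assumptions):
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main() {
    // Check 1: the intel_pt PMU must be registered in sysfs.
    FILE *f = std::fopen("/sys/bus/event_source/devices/intel_pt/type", "r");
    if (!f) {
        std::puts("no intel_pt PMU: CPU or kernel lacks Intel PT support");
        return 1;
    }
    int type = 0;
    if (std::fscanf(f, "%d", &type) != 1) { std::fclose(f); return 1; }
    std::fclose(f);

    // Check 2: a perf_event_open() for that PMU type must succeed.
    struct perf_event_attr attr;
    std::memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;            // PMU type read from sysfs
    attr.exclude_kernel = 1;     // assumption: trace user space only

    long fd = syscall(SYS_perf_event_open, &attr, 0 /* this process */,
                      -1 /* any CPU */, -1 /* no group */, 0 /* flags */);
    if (fd >= 0) {
        std::puts("perf_event_open succeeded: record btrace should be usable");
        close(fd);
        return 0;
    }
    std::perror("perf_event_open");
    return 1;
}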
Kernel side
cd into the kernel 4.1 source and:
git grep '"intel_pt"'
we find arch/x86/kernel/cpu/perf_event_intel_pt.c which sets up that file. In particular, it does:
if (!test_cpu_cap(&boot_cpu_data, X86_FEATURE_INTEL_PT))
goto fail;
so the intel_pt CPU feature is a prerequisite.
How I found kernel_supports_pt
First grep for:
git grep 'Target does not support branch tracing.'
which leads us to btrace.c:btrace_enable. After a quick debug with:
gdb -q -ex start -ex 'b btrace_enable' -ex c --args /home/ciro/git/binutils-gdb/install/bin/gdb --batch -ex start -ex 'record btrace' ./hello_world.out
VirtualBox does not support it either: Extract execution log from gdb record in a VirtualBox VM
Intel SDE
Intel SDE 7.21 already has this CPU feature, checked with:
./sde64 -- cpuid | grep 'Intel processor trace'
But I'm not sure if the Linux kernel can be run on it: https://superuser.com/questions/950992/how-to-run-the-linux-kernel-on-intel-software-development-emulator-sde
Other GDB methods
More generic questions, with less efficient software solutions:
call graph: List of all function calls made in an application
instruction trace: Displaying each assembly instruction executed in gdb
At least a partial answer (for the "am I doing it wrong" aspect) - from gdb-7.6.50.20140108/gdb/NEWS
* A new record target "record-btrace" has been added. The new target
uses hardware support to record the control-flow of a process. It
does not support replaying the execution, but it implements the
below new commands for investigating the recorded execution log.
This new recording method can be enabled using:
record btrace
The "record-btrace" target is only available on Intel Atom processors
and requires a Linux kernel 2.6.32 or later.
* Two new commands have been added for record/replay to give information
about the recorded execution without having to replay the execution.
The commands are only supported by "record btrace".
record instruction-history prints the execution history at
instruction granularity
record function-call-history prints the execution history at
function granularity
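On a machine that supports it, the usage would presumably look like this (a sketch, not an actual session):
(gdb) record btrace
(gdb) continue
...
(gdb) record function-call-history
(gdb) record instruction-history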
It's not often that I envy the owner of an Atom processor ;-)
I'll edit the question to refocus upon the question of workarounds or plans for future support.

disabling c++ output message for sql loader

I have C++ code in which I invoke SQL*Loader using system(). When SQL*Loader executes, I get the messages below, which I want to disable:
SQL*Loader: Release 10.2.0.1.0 - Production on Thu Mar 14 14:11:25 2013
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 20
Commit point reached - logical record count 40
Commit point reached - logical record count 60
Commit point reached - logical record count 80
Remember that the system function uses the shell to execute the command. So you can use normal shell redirection:
system("/some/program > /dev/null");
You can use the silent=ALL option to suppress these messages:
system("/orahomepath/bin/sqlldr silent=ALL ...")
See also SQL*Loader Command-Line Reference:
As SQL*Loader executes, you also see feedback messages on the screen, for example:
Commit point reached - logical record count 20
You can suppress these messages by specifying SILENT with one or more values:
...
ALL - Implements all of the suppression values: HEADER, FEEDBACK, ERRORS, DISCARDS, and PARTITIONS.
Depending on the SQL*Loader version, you might still end up with some output; if you need complete silence, see the answer from @Joachim (shell redirection) as well.
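A hypothetical way to combine both suggestions from C++, using silent=ALL plus shell redirection as a belt-and-braces approach (the sqlldr path and control file name are placeholders):
#include <cstdlib>

int main() {
    // silent=ALL suppresses SQL*Loader's own messages; the redirection
    // discards anything that still reaches stdout/stderr.
    int rc = std::system(
        "/orahomepath/bin/sqlldr control=load.ctl silent=ALL "
        "> /dev/null 2>&1");
    return rc == 0 ? 0 : 1;
}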