How to find the task which core dumped? - gdb

Given a core dump from, say, a router or switch running vxWorks, how do I find the task that core dumped?
My guess is to list the tasks and see which one is in the running state. Is that right? Please let me know.

You can use coreDumpShow() from coreDumpLib.h (vxWorks 6.9).
-> coreDumpShow 0,1
NAME        IDX VALID ERRNO      SIZE       CKSUM TYPE         TASK
----------- --- ----- ---------- ---------- ----- ------------ ----------
vxcore1.z   1   Y     N/A        0x000ef05b OK    KERNEL_TASK  t1
Core Dump detailed information:
-------------------------------
Time: THU JAN 01 00:01:42 1970 (ticks = 6174)
Task: "t1" (0x611e0a20)
Process: "(Kernel)" (0x6017a500)
Description: fatal kernel task-level exception!
Exception number: 0xb
Program counter: 0x6003823e
Stack pointer: 0x604d3da8
Frame pointer: 0x604d3fb0
value = 0 = 0x0
You might also want to try looking in coreDumpUtilLib:
coreDumpIsAvailable( ) - is a kernel core dump available for retrieval
coreDumpNextGet( ) - get the next kernel core dump on device
coreDumpInfoGet( ) - get information on a kernel core dump
coreDumpOpen( ) - open an existing kernel core dump for retrieval
coreDumpClose( ) - close a kernel core dump
coreDumpRead( ) - read from a kernel core dump file
coreDumpCopy( ) - copy a kernel core dump to the given path
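If you need to do this programmatically rather than from the shell, the functions above can be combined into something like the following sketch. The exact parameter lists and the CORE_DUMP_ID handle type are assumptions based on the reference entries above; check coreDumpUtilLib.h for your vxWorks version:

/* Sketch: check for a kernel core dump on the dump device and copy the
 * first one to a host-visible path. The iteration idiom (NULL returning
 * the first dump) and the signatures are assumptions -- verify against
 * coreDumpUtilLib.h before use. */
#include <coreDumpUtilLib.h>

void copyFirstCoreDump (void)
    {
    if (!coreDumpIsAvailable ())        /* any kernel core dump on device? */
        return;

    CORE_DUMP_ID dumpId = coreDumpNextGet (NULL);   /* assumed: first dump */
    if (dumpId != NULL)
        coreDumpCopy (dumpId, "/ata0/vxcore1.z");   /* hypothetical target path */
    }

From there you can pull the file off the target and inspect it on the host.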

STM32cubeide with stm32f103c8t6 could not verify ST device

I am new to embedded development and STM32CubeIDE, teaching myself so I can use it in a group project related to my university studies.
After purchasing a "blue pill" from AliExpress, I realized I might have bought a clone chip. I followed the instructions shown here (stm32 community site), and I'm still getting an error that the IDE cannot verify my ST device.
Here is what I have as output:
Open On-Chip Debugger 0.11.0+dev-00443-gcf12591 (2022-02-09-13:33) [ST Internal]
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
swv
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : STLINK V2J39S7 (API v2) VID:PID 0483:3748
Info : Target voltage: 3.286227
Info : clock speed 4000 kHz
Info : stlink_dap_op_connect(connect)
Info : SWD DPIDR 0x2ba01477
Info : STM32F103C8Tx.cpu: Cortex-M3 r2p1 processor detected
Info : STM32F103C8Tx.cpu: target has 6 breakpoints, 4 watchpoints
Info : starting gdb server for STM32F103C8Tx.cpu on 3333
Info : Listening on port 3333 for gdb connections
Info : accepting 'gdb' connection on tcp/3333
Info : device id = 0x20036410
Info : flash size = 128kbytes
Warn : GDB connection 1 on target STM32F103C8Tx.cpu not halted
undefined debug reason 8 - target needs reset
O.K.
O.K.:0xE00FFFD0
Info : dropped 'gdb' connection
shutdown command invoked
I see in the console "undefined debug reason 8 - target needs reset"; is this the problem? If so, what can I do to solve it? If not, what can I do other than purchase another board?
Below is my test Debug.cfg, in case I need to change something in there:
# This is an genericBoard board with a single STM32F103C8Tx chip
#
# Generated by STM32CubeIDE
# Take care that such file, as generated, may be overridden without any early notice. Please have a look to debug launch configuration setup(s)
source [find interface/stlink-dap.cfg]
set WORKAREASIZE 0x5000
transport select "dapdirect_swd"
set CHIPNAME STM32F103C8Tx
set BOARDNAME genericBoard
# Enable debug when in low power modes
set ENABLE_LOW_POWER 1
# Stop Watchdog counters when halt
set STOP_WATCHDOG 1
# STlink Debug clock frequency
set CLOCK_FREQ 4000
# Reset configuration
# use software system reset if reset done
reset_config none
set CONNECT_UNDER_RESET 0
set CORE_RESET 0
# ACCESS PORT NUMBER
set AP_NUM 0
# GDB PORT
set GDB_PORT 3333
# BCTM CPU variables
source [find target/stm32f1x.cfg]
# SWV trace
set USE_SWO 0
set swv_cmd "-protocol uart -output :3344 -traceclk 16000000"
source [find board/swv.tcl]
Thanks
I found an FT232 that I had spare, and I was able to program the chip using the STM32 programmer software and a hex file generated by the IDE.
I'll use this method if I ever run into cloned chips again, and the ST-Link V2 if I ever get a genuine board.
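For reference, that flash-over-UART step can also be scripted with the CubeProgrammer CLI, along the lines of STM32_Programmer_CLI -c port=/dev/ttyUSB0 br=115200 -w app.hex -v (port, baud rate, and file name here are placeholders, and the chip must be started with BOOT0 pulled high so it enters the built-in UART bootloader).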

100% CPU for Docker Container when started via entrypoint

We have written a small C++ application which mainly does some supervising of other processes via ZeroMQ. So most of the time the application idles and periodically sends and receives some requests.
We built a Docker image based on Ubuntu which contains just this application, some dependencies, and an entrypoint.sh. The entrypoint is basically a /bin/bash script that manipulates some configuration files based on environment variables and then starts the application via exec.
Now here's the strange part. When we start the application manually without docker, we get a CPU usage of nearly 0%. When we start the same application as docker image, the CPU usage goes up to 100% and blocks exactly one CPU core.
To find out what was happening, we set the entrypoint of our image to /bin/yes (just to make sure the container keeps running) and then started a bash inside the running container. From there we started entrypoint.sh manually and the CPU again was at 0%.
So we are wondering what could cause this situation. Is there anything we need to add to our Dockerfile to prevent this?
Here is some output generated with strace. I used strace -p <pid> -f -c and waited five minutes to collect some insights.
1. Running with docker run (100% CPU)
strace: Process 12621 attached with 9 threads
strace: [ Process PID=12621 runs in x32 mode. ]
[...]
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 71.26   17.866443         144    124127           nanosleep
 14.40    3.610578       55547        65        31 futex
 14.07    3.528224        1209      2918           epoll_wait
  0.10    0.024760        4127         6         1 restart_syscall
  0.10    0.024700           0     66479           poll
  0.05    0.011339           4      2902         3 recvfrom
  0.02    0.005517           2      2919           write
  0.01    0.001685           1      2909           read
  0.00    0.000070          70         1         1 connect
  0.00    0.000020          20         1           socket
  0.00    0.000010           1        18           epoll_ctl
  0.00    0.000004           1         6           sendto
  0.00    0.000004           1         4           fcntl
  0.00    0.000000           0         1           close
  0.00    0.000000           0         1           getpeername
  0.00    0.000000           0         1           setsockopt
  0.00    0.000000           0         1           getsockopt
------ ----------- ----------- --------- --------- ----------------
100.00   25.073354                202359        36 total
2. Running with a dummy entrypoint and docker exec (0% CPU)
strace: Process 31394 attached with 9 threads
[...]
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 67.32   12.544007         102    123355           nanosleep
 14.94    2.784310       39216        71        33 futex
 14.01    2.611210         869      3005           epoll_wait
  2.01    0.373797           6     66234           poll
  1.15    0.213487          71      2999           recvfrom
  0.41    0.076113       15223         5         1 restart_syscall
  0.09    0.016295           5      3004           write
  0.08    0.014458           5      3004           read
------ ----------- ----------- --------- --------- ----------------
100.00   18.633677                201677        34 total
Note that in the first case I started strace slightly earlier, so there are some additional calls which can all be traced back to initialization code.
The only difference I could find is the line Process PID=12621 runs in x32 mode when using docker run. Could this be an issue?
Also note that in both measurements the total runtime is about 20 seconds while the process was running for five minutes.
Some further investigation of the 100% CPU case: I checked the process with top -H -p <pid>, and only the parent process was using 100% CPU while the child threads were all mostly idling. But when calling strace -p <pid> on the parent process, I could verify that the process did not do anything (no output was generated).
So I had a process using one whole CPU core while doing exactly nothing.
As it turned out, some legacy part of the software was waiting for console input in a while loop:
while (!finished) {
    std::cin >> command;
    processCommand(command);
}
This worked fine when running locally and with docker exec. But since the executable was started as a Docker service, there was no console attached. With stdin absent, std::cin hit end-of-file, so the extraction failed and returned immediately. This way we created an endless loop without any sleeps, which naturally caused 100% CPU usage.
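A minimal fix is to treat a failed extraction as "no console" and leave the loop instead of spinning; here is a sketch (processCommand stands in for the real handler):

#include <iostream>
#include <string>

void processCommand(const std::string& command);  // the real handler, defined elsewhere

void commandLoop() {
    std::string command;
    // operator>> returns the stream, which converts to false on EOF or error,
    // e.g. when the process runs as a docker service with no stdin attached.
    while (std::cin >> command) {
        processCommand(command);
    }
    // stdin is gone: fall through instead of busy-looping.
}

Alternatively, isatty(STDIN_FILENO) can be checked once at startup to skip the console loop entirely when no terminal is present.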
Thanks to @Botje for guiding us through the debugging process.

How can I map a control to its device?

amixer -c 0 controls:
...
numid=22,iface=MIXER,name='Capture Switch'
numid=24,iface=MIXER,name='Capture Switch',index=1
numid=21,iface=MIXER,name='Capture Volume'
numid=23,iface=MIXER,name='Capture Volume',index=1
...
arecord -l:
card 0: PCH [HDA Intel PCH], device 0: ALC662 rev3 Analog [ALC662 rev3 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 2: ALC662 rev3 Alt Analog [ALC662 rev3 Alt Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
I have two controls with the same names. I know that one handles the stream on card 0, device 0, and the second handles card 0, device 2. But how can I find out from my program which control is responsible for a specific device? Does the control with numid=21 or the control with numid=23 handle device 2?
I can find some useful info about it in /proc/asound/card0/codec#0, but I need to detect it from my code.
Controls of interface MIXER are not directly associated with any device.
The only way to find out more would be to use some hardware-dependent mechanism. However, in the case of HDA, reading codec#x is not very useful because the exact algorithm the kernel uses to map widgets to controls is not guaranteed.
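You can see this from the control API itself: every element ID carries iface, device, subdevice, and index fields, but for MIXER elements the device field does not point at the PCM device the control affects. A small sketch using alsa-lib's snd_ctl interface, assuming card hw:0 as in the question:

// Enumerate the control elements of card 0 and print their numid,
// interface, name, index, and device fields. For iface=MIXER elements
// the device field stays 0 regardless of which capture stream they
// actually influence, which is the point made above.
#include <alsa/asoundlib.h>
#include <cstdio>

int main() {
    snd_ctl_t* ctl = nullptr;
    if (snd_ctl_open(&ctl, "hw:0", 0) < 0)
        return 1;

    snd_ctl_elem_list_t* list;
    snd_ctl_elem_list_alloca(&list);
    snd_ctl_elem_list(ctl, list);                       // first call: get count
    unsigned int count = snd_ctl_elem_list_get_count(list);
    snd_ctl_elem_list_alloc_space(list, count);
    snd_ctl_elem_list(ctl, list);                       // second call: fill IDs

    for (unsigned int i = 0; i < snd_ctl_elem_list_get_used(list); ++i) {
        std::printf("numid=%u,iface=%s,name='%s',index=%u,device=%u\n",
                    snd_ctl_elem_list_get_numid(list, i),
                    snd_ctl_elem_iface_name(snd_ctl_elem_list_get_interface(list, i)),
                    snd_ctl_elem_list_get_name(list, i),
                    snd_ctl_elem_list_get_index(list, i),
                    snd_ctl_elem_list_get_device(list, i));
    }

    snd_ctl_elem_list_free_space(list);
    snd_ctl_close(ctl);
    return 0;
}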

Profiling OpenCV using OProfile

I have this basic OpenCV program:
#include <iostream>
#include "opencv2/opencv.hpp"

int main() {
    std::cout << "Reading Image..." << std::endl;
    cv::Mat img = cv::imread("all_souls_000000.jpg", cv::IMREAD_GRAYSCALE);
    if (!img.data)
        std::cerr << "Error reading image" << std::endl;
    return 0;
}
This creates the executable ReadImage. I want to profile it using OProfile. However, running:
operf ./ReadImage > ReadImage.log
returns:
Kernel profiling is not possible with current system config.
Set /proc/sys/kernel/kptr_restrict to 0 to collect kernel samples.
operf: Profiler started
* * * * WARNING: Profiling rate was throttled back by the kernel * * * *
The number of samples actually recorded is less than expected, but is
probably still statistically valid. Decreasing the sampling rate is the
best option if you want to avoid throttling.
Profiling done.
Why does this happen? What is the best way to profile OpenCV?
I was able to run operf on an OpenCV app with this result. Is this what you are looking for?
Profiling started at Tue Jan 31 16:52:48 2017
Profiling stopped at Tue Jan 31 16:52:53 2017
-- OProfile/operf Statistics --
Nr. non-backtrace samples: 337018
Nr. kernel samples: 5603
Nr. user space samples: 331415
Nr. samples lost due to sample address not in expected range for domain: 0
Nr. lost kernel samples: 0
Nr. samples lost due to sample file open failure: 0
Nr. samples lost due to no permanent mapping: 0
Nr. user context kernel samples lost due to no app info available: 0
Nr. user samples lost due to no app info available: 0
Nr. backtraces skipped due to no file mapping: 0
Nr. hypervisor samples dropped due to address out-of-range: 0
Nr. samples lost reported by perf_events kernel: 0
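Note that this run includes kernel samples (Nr. kernel samples: 5603). If operf prints the warning from the question that kernel profiling is not possible, you can enable it the way the message itself suggests, by writing 0 to /proc/sys/kernel/kptr_restrict (for example with echo 0 | sudo tee /proc/sys/kernel/kptr_restrict). The throttling warning is separate and only means the kernel reduced the sampling rate, so the profile is smaller than expected but usually still valid.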

Ember CLI build killed

I build my Ember CLI app inside a Docker container on startup. The build fails without an error message; it just says "Killed":
root@fstaging:/frontend/source# node_modules/ember-cli/bin/ember build -prod
version: 1.13.15
Could not find watchman, falling back to NodeWatcher for file system events.
Visit http://www.ember-cli.com/user-guide/#watchman for more info.
Buildingember-auto-register-helpers is not required for Ember 2.0.0 and later please remove from your `package.json`.
Building.DEPRECATION: The `bind-attr` helper ('app/templates/components/file-selector.hbs' @ L1:C7) is deprecated in favor of HTMLBars-style bound attributes.
at isBindAttrModifier (/app/source/bower_components/ember/ember-template-compiler.js:11751:34)
Killed
The same Docker image starts up successfully in another environment that has no hardware constraints. Does Ember CLI have hard-coded hardware requirements for the build process? In the failing environment, RAM is limited to 128m and swap to 2g.
That is likely not enough memory for Ember CLI to do what it needs. You are correct in that the process is being killed because of an OOM situation. If you log in to the host and take a look at the dmesg output, you will probably see something like:
V8 WorkerThread invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
V8 WorkerThread cpuset=867781e35d8a0a231ef60a272ae5d418796c45e92b5aa0233df317ce659b0032 mems_allowed=0
CPU: 0 PID: 2027 Comm: V8 WorkerThread Tainted: G O 4.1.13-boot2docker #1
Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
0000000000000000 00000000000000d0 ffffffff8154e053 ffff880039381000
ffffffff8154d3f7 ffff8800395db528 ffff8800392b4528 ffff88003e214580
ffff8800392b4000 ffff88003e217080 ffffffff81087faf ffff88003e217080
Call Trace:
[<ffffffff8154e053>] ? dump_stack+0x40/0x50
[<ffffffff8154d3f7>] ? dump_header.isra.10+0x8c/0x1f4
[<ffffffff81087faf>] ? finish_task_switch+0x4c/0xda
[<ffffffff810f46b1>] ? oom_kill_process+0x99/0x31c
[<ffffffff811340e6>] ? task_in_mem_cgroup+0x5d/0x6a
[<ffffffff81132ac5>] ? mem_cgroup_iter+0x1c/0x1b2
[<ffffffff81134984>] ? mem_cgroup_oom_synchronize+0x441/0x45a
[<ffffffff8113402f>] ? mem_cgroup_is_descendant+0x1d/0x1d
[<ffffffff810f4d77>] ? pagefault_out_of_memory+0x17/0x91
[<ffffffff815565d8>] ? page_fault+0x28/0x30
Task in /docker/867781e35d8a0a231ef60a272ae5d418796c45e92b5aa0233df317ce659b0032 killed as a result of limit of /docker/867781e35d8a0a231ef60a272ae5d418796c45e92b5aa0233df317ce659b0032
memory: usage 131072kB, limit 131072kB, failcnt 2284203
memory+swap: usage 262032kB, limit 262144kB, failcnt 970540
kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Memory cgroup stats for /docker/867781e35d8a0a231ef60a272ae5d418796c45e92b5aa0233df317ce659b0032: cache:340KB rss:130732KB rss_huge:10240KB mapped_file:8KB writeback:0KB swap:130960KB inactive_anon:72912KB active_anon:57880KB inactive_file:112KB active_file:40KB unevictable:0KB
[ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
[ 1993]     0  1993      380        1       6       3       17             0 sh
[ 2025]     0  2025   203490    32546     221     140    32713             0 npm
Memory cgroup out of memory: Kill process 2025 (npm) score 1001 or sacrifice child
Killed process 2025 (npm) total-vm:813960kB, anon-rss:130184kB, file-rss:0kB
It might be worthwhile to profile the container using something like https://github.com/google/cadvisor to find out what kind of memory maximums it may need.
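In the short term, simply raising the limits should let the build finish; as a rough assumption about your setup, something like docker run -m 512m --memory-swap=1g ... gives the node/ember build process considerably more headroom than the current 128m.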