I'm facing an issue with my GStreamer pipeline application on an iMX6Q-based board. When I start recording the video input into a file, after a random duration, the pipeline freezes for a few seconds on the main display before playing again. This can occur several times for the same pipeline, and the recorded video contains as many "jumps" as freezes that occurred.
I have the same issue with a pipeline built with gst-launch:
gst-launch-1.0 imxv4l2videosrc ! imxvideoconvert_ipu ! tee name=t t. ! queue ! imxipuvideosink t. ! queue ! vpuenc_h264 bitrate=20000 quant=0 ! matroskamux ! filesink async=true location=/home/root/exam/noname.mp4
I have tried with imxvpuenc_h264, with the same result.
So I added DEBUG outputs and a timeoverlay to catch timestamps; the issue occurred at 14, 24, and 44.077 seconds, and the associated log file is below:
enter gst log
Here is another output that occurred only once (I've never caught it again), but I have noticed that at each "freed block" occurrence, the pipeline unfreezes...
0:02:21.136883684 696 0x1fc7d80 INFO imxphysmemallocator phys_mem_allocator.c:190:gst_imx_phys_mem_allocator_free:
freed block 0x69203548 at phys addr 0x3c9c0000 with size: 235200
0:02:21.137171351 696 0x1fc7d80 INFO imxphysmemallocator phys_mem_allocator.c:190:gst_imx_phys_mem_allocator_free:
freed block 0x69203688 at phys addr 0x3c5c0000 with size: 235200
0:02:21.139984017 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a611ad8 at phys addr 0x3ce40000 with 235200
bytes
0:02:21.478791684 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a611a88 at phys addr 0x3c5c0000 with 235200
bytes
0:02:21.487979017 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69202d40 at phys addr 0x3c9c0000 with 235200
bytes
0:02:35.688026352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6058a8 at phys addr 0x3ce80000 with 235200
bytes
0:02:35.718750019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69202f70 at phys addr 0x3cec0000 with 235200
bytes
0:02:35.751753019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a611948 at phys addr 0x3cf00000 with 235200
bytes
0:02:35.785059352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6118a8 at phys addr 0x3cf40000 with 235200
bytes
0:02:35.819149686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69203728 at phys addr 0x3cf80000 with 235200
bytes
0:02:35.851361352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6119e8 at phys addr 0x3cfc0000 with 235200
bytes
0:02:35.886579019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a611a38 at phys addr 0x3d000000 with 235200
bytes
0:02:35.918826019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209da8 at phys addr 0x3d040000 with 235200
bytes
0:02:35.949963686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209e48 at phys addr 0x3d080000 with 235200
bytes
0:02:35.984716352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209ee8 at phys addr 0x3d0c0000 with 235200
bytes
0:02:36.018140019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209f88 at phys addr 0x3d100000 with 235200
bytes
0:02:36.051139686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613470 at phys addr 0x3d140000 with 235200
bytes
0:02:36.084571352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613510 at phys addr 0x3d180000 with 235200
bytes
0:02:36.117948019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6135b0 at phys addr 0x3d1c0000 with 235200
bytes
0:02:36.151357352 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613650 at phys addr 0x3d200000 with 235200
bytes
0:02:36.184409686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6136f0 at phys addr 0x3d240000 with 235200
bytes
0:02:36.217806686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613790 at phys addr 0x3d280000 with 235200
bytes
0:02:36.250990019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613420 at phys addr 0x3d2c0000 with 235200
bytes
0:02:36.283248686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209e98 at phys addr 0x466c0000 with 235200
bytes
0:02:36.316750019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a605b28 at phys addr 0x46700000 with 235200
bytes
0:02:36.349774019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69202de0 at phys addr 0x46740000 with 235200
bytes
0:02:36.383089019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a611808 at phys addr 0x46780000 with 235200
bytes
0:02:36.416394686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613740 at phys addr 0x467c0000 with 235200
bytes
0:02:36.452180686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613600 at phys addr 0x46800000 with 235200
bytes
0:02:36.481756019 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a6134c0 at phys addr 0x46840000 with 235200
bytes
0:02:36.516267686 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209f38 at phys addr 0x46880000 with 235200
bytes
0:02:54.977848021 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x6a613560 at phys addr 0x468c0000 with 235200
bytes
0:02:54.984635355 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209d58 at phys addr 0x46900000 with 235200
bytes
0:02:55.271305021 696 0x1fc7c30 INFO imxphysmemallocator phys_mem_allocator.c:174:gst_imx_phys_mem_allocator_alloc:
allocated memory block 0x69209cb8 at phys addr 0x46940000 with 235200
bytes
0:02:55.623891021 696 0x1fc7d80 INFO imxphysmemallocator phys_mem_allocator.c:190:gst_imx_phys_mem_allocator_free:
freed block 0x6a6118a8 at phys addr 0x3cf40000 with size: 235200
I hope that someone could help me.
Have a nice day.
EDIT 1:
On the bus, there is no message related to a state change when the freeze happens, nor any other message...
Looking at the command you are running on the board, you are doing two jobs in parallel: one branch sinks to the display, and the other encodes and saves to a file.
Have you tested both pipelines separately? Do they work if you run them one at a time?
I remember that on NXP i.MX6Q boards there is a memory size reserved for the VPU, set either in the dtsi file or in U-Boot. Please check that mem_size, try increasing it, and test again. Maybe that will help you.
Thanks
Related
When I run the following program through valgrind (valgrind ./a.out --leak-check=yes):
int main() {
    char* ptr = new char;
    return 0;
}
the report contains this:
==103== error calling PR_SET_PTRACER, vgdb might block
==103==
==103== HEAP SUMMARY:
==103== in use at exit: 1 bytes in 1 blocks
==103== total heap usage: 2 allocs, 1 frees, 72,705 bytes allocated
==103==
==103== LEAK SUMMARY:
==103== definitely lost: 1 bytes in 1 blocks
==103== indirectly lost: 0 bytes in 0 blocks
==103== possibly lost: 0 bytes in 0 blocks
==103== still reachable: 0 bytes in 0 blocks
==103== suppressed: 0 bytes in 0 blocks
==103== Rerun with --leak-check=full to see details of leaked memory
What's the extra 72,704-byte allocation valgrind is reporting? It seems to be taken care of before the program is over, so I'm guessing it's something the compiler is doing. I'm using gcc on an Ubuntu subsystem in Windows 10.
Edit: The memory leak was intentional in this example, but I get similar messages about an extra allocation regardless of whether or not there's a leak.
As you surmise, the extra allocation is part of the runtime, and it is correctly cleaned up as you can see in Valgrind's output. Don't worry about it.
If you're asking specifically what it is, you'll have to read the C++ runtime library of your specific compiler and version (libstdc++, since you're using gcc). But again, it shouldn't concern you.
My valgrind is telling me that it found non-freed heap memory for the most trivial C++ code.
My code is shown as follows:
#include <iostream>
#include <string>
int main() {
    std::cout << "Hello!!!!" << std::endl;
    return 0;
}
And the result of valgrind is here:
==12455== HEAP SUMMARY:
==12455== in use at exit: 72,704 bytes in 1 blocks
==12455== total heap usage: 2 allocs, 1 frees, 73,728 bytes allocated
==12455==
==12455== 72,704 bytes in 1 blocks are still reachable in loss record 1 of 1
==12455== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==12455== by 0x4EC3EFF: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21)
==12455== by 0x40106C9: call_init.part.0 (dl-init.c:72)
==12455== by 0x40107DA: call_init (dl-init.c:30)
==12455== by 0x40107DA: _dl_init (dl-init.c:120)
==12455== by 0x4000C69: ??? (in /lib/x86_64-linux-gnu/ld-2.23.so)
==12455==
==12455== LEAK SUMMARY:
==12455== definitely lost: 0 bytes in 0 blocks
==12455== indirectly lost: 0 bytes in 0 blocks
==12455== possibly lost: 0 bytes in 0 blocks
==12455== still reachable: 72,704 bytes in 1 blocks
==12455== suppressed: 0 bytes in 0 blocks
==12455==
==12455== For counts of detected and suppressed errors, rerun with: -v
==12455== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Is this a bug of valgrind?
This is due to the way the C++ standard library works. The containers allocate chunks of memory (called pools) and manage them internally. They basically use a memory manager of their own rather than relying on the system's memory manager. Therefore, when an object is destroyed, its memory is freed by the internal allocator for reuse, but not given back to the operating system.
This is also described in valgrind's FAQ here.
To generalize a bit more, valgrind is a very useful tool, but you should not aim for 0 leaks; rather, aim to understand its reports and find the leaks that indicate a problem in the code.
I use valgrind 3.14.0 under Ubuntu 19.04 and I don't get any detections. I ran with --leak-check=full and without. Maybe it's just some versions of valgrind.
I'm implementing libudev-based monitoring code for USB devices under the hidraw driver. I've implemented the standard example from the web and checked for memory leaks with valgrind and gdb.
/*******************************************
libudev example.
This example prints out properties of
each of the hidraw devices. It then
creates a monitor which will report when
hidraw devices are connected or removed
from the system.
This code is meant to be a teaching
resource. It can be used for anyone for
any reason, including embedding into
a commercial product.
The document describing this file, and
updated versions can be found at:
http://www.signal11.us/oss/udev/
Alan Ott
Signal 11 Software
2010-05-22 - Initial Revision
2010-05-27 - Monitoring initialization
moved to before enumeration.
*******************************************/
I was unhappy to find that some libudev functions that are not supposed to allocate memory are leaking. I traced this by exiting (after all objects are unreffed) at different points and looking at the valgrind report. Specifically, this code leaks:
int main (void)
{
    struct udev *udev;
    struct udev_enumerate *enumerate;
    struct udev_list_entry *devices, *dev_list_entry;
    struct udev_device *dev, *devParent;
    struct udev_monitor *mon;
    int fd;

    /* Create the udev object */
    udev = udev_new();
    if (!udev)
    {
        printf("Can't create udev\n");
        exit(1);
    }

    /* This section sets up a monitor which will report events when
       blah blah....
       "hidraw" devices. */

    /* Set up a monitor to monitor hidraw devices */
    mon = udev_monitor_new_from_netlink(udev, "udev");
    udev_monitor_filter_add_match_subsystem_devtype(mon, "hidraw", NULL);
    udev_monitor_enable_receiving(mon);

    /* Get the file descriptor (fd) for the monitor.
       This fd will get passed to select() */
    fd = udev_monitor_get_fd(mon);

    /* Create a list of the devices in the 'hidraw' subsystem. */
    enumerate = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(enumerate, "hidraw");

    if (1)
    {
        // leak debug block
        udev_enumerate_unref(enumerate);
        udev_monitor_unref(mon);
        udev_unref(udev);
        return 0;
    }

    udev_enumerate_scan_devices(enumerate);
    devices = udev_enumerate_get_list_entry(enumerate);
/* For each item enumerated, print out its information.
Here is the valgrind output:
==11424== HEAP SUMMARY:
==11424== in use at exit: 4,096 bytes in 1 blocks
==11424== total heap usage: 11 allocs, 10 frees, 28,086 bytes allocated
==11424==
==11424== LEAK SUMMARY:
==11424== definitely lost: 0 bytes in 0 blocks
==11424== indirectly lost: 0 bytes in 0 blocks
==11424== possibly lost: 0 bytes in 0 blocks
==11424== still reachable: 4,096 bytes in 1 blocks
==11424== suppressed: 0 bytes in 0 blocks
==11424== Rerun with --leak-check=full to see details of leaked memory
If I place the "leak debug block" one line earlier than its position above, valgrind exits with a clean result of 0 bytes leaked.
If I move the block one line further down the code, the next function increases the leak size and the number of blocks:
==14262== in use at exit: 8,192 bytes in 2 blocks
==14262== total heap usage: 45 allocs, 43 frees, 150,907 bytes allocated
==14262==
==14262== LEAK SUMMARY:
==14262== definitely lost: 0 bytes in 0 blocks
==14262== indirectly lost: 0 bytes in 0 blocks
==14262== possibly lost: 0 bytes in 0 blocks
==14262== still reachable: 8,192 bytes in 2 blocks
==14262== suppressed: 0 bytes in 0 blocks
==14262== Rerun with --leak-check=full to see details of leaked memory
It gets worse after the next line, and it is a concern because my code needs to run for years and such leaks can accumulate unchecked.
Any suggestions as to why this happens and how to keep it under control?
It seems that these memory leaks, related to the hash tables as reported by valgrind, are not a concern; see the discussion at
https://github.com/libratbag/libratbag/issues/405 and a relevant Red Hat bug report at https://bugzilla.redhat.com/show_bug.cgi?id=1280334.
I searched around but there seem to be no answers for my case, so I decided to ask here. I used valgrind to check my program; here is the result:
==24810== HEAP SUMMARY:
==24810== in use at exit: 1,478 bytes in 30 blocks
==24810== total heap usage: 50 allocs, 20 frees, 43,078 bytes allocated
==24810==
==24810== LEAK SUMMARY:
==24810== definitely lost: 0 bytes in 0 blocks
==24810== indirectly lost: 0 bytes in 0 blocks
==24810== possibly lost: 0 bytes in 0 blocks
==24810== still reachable: 1,478 bytes in 30 blocks
==24810== suppressed: 0 bytes in 0 blocks
Is that a leak?
If so, what could be the reason?
It isn't a true leak in that the 30 extra chunks that were allocated are still reachable. It appears that you failed to free some structures at the end of your program's run. Note that the run time libraries will sometimes leave a few allocated objects around at the end but this doesn't feel like one of those cases.
Not a leak; it just means that some blocks of memory are still reachable at termination. To look for true memory leaks, look at "definitely lost" and "indirectly lost".
See this post: Still Reachable Leak detected by Valgrind
I have a C++ program that uses a shared C library (namely Darknet) to load and make use of lightweight neural networks.
The program runs flawlessly under Ubuntu Trusty on an x86_64 box, but crashes with a segmentation fault under the same OS on an ARM device. The reason for the crash is that calloc returns NULL during memory allocation for an array. The code looks like the following:
l.filters = calloc(c * n * size * size, sizeof(float));
...
for (i = 0; i < c * n * size * size; ++i)
    l.filters[i] = scale * rand_uniform(-1, 1);
So, after trying to write the first element, the application halts with a segfault.
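For reference, a defensive variant (the alloc_filters helper below is hypothetical, not Darknet's actual code) would compute the element count in size_t rather than int and check calloc's return value before writing:

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical defensive wrapper around the allocation from the question.
float* alloc_filters(int c, int n, int size) {
    size_t count = static_cast<size_t>(c) * n * size * size;  // widen before multiplying
    float* filters = static_cast<float*>(std::calloc(count, sizeof(float)));
    if (filters == nullptr) {
        std::fprintf(stderr, "calloc of %zu floats failed\n", count);
    }
    return filters;  // caller must check for nullptr before writing any element
}
```

Checking the return value would at least turn the segfault into a diagnosable error message.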
In my case the amount of memory to be allocated is 4.7 MB, while there is more than 1 GB available. I also tried running it after a reboot to exclude heap fragmentation, but with the same result.
What is more interesting, when I try to load a larger network, it works just fine. And the two networks have the same configuration for the layer where the crash happens...
Valgrind tells me nothing new:
==2591== Invalid write of size 4
==2591== at 0x40C70: make_convolutional_layer (convolutional_layer.c:135)
==2591== by 0x2C0DF: parse_convolutional (parser.c:159)
==2591== by 0x2D7EB: parse_network_cfg (parser.c:493)
==2591== by 0xBE4D: main (annotation.cpp:58)
==2591== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==2591==
==2591==
==2591== Process terminating with default action of signal 11 (SIGSEGV)
==2591== Access not within mapped region at address 0x0
==2591== at 0x40C70: make_convolutional_layer (convolutional_layer.c:135)
==2591== by 0x2C0DF: parse_convolutional (parser.c:159)
==2591== by 0x2D7EB: parse_network_cfg (parser.c:493)
==2591== by 0xBE4D: main (annotation.cpp:58)
==2591== If you believe this happened as a result of a stack
==2591== overflow in your program's main thread (unlikely but
==2591== possible), you can try to increase the size of the
==2591== main thread stack using the --main-stacksize= flag.
==2591== The main thread stack size used in this run was 4294967295.
==2591==
==2591== HEAP SUMMARY:
==2591== in use at exit: 1,731,358,649 bytes in 2,164 blocks
==2591== total heap usage: 12,981 allocs, 10,817 frees, 9,996,704,911 bytes allocated
==2591==
==2591== LEAK SUMMARY:
==2591== definitely lost: 16,645 bytes in 21 blocks
==2591== indirectly lost: 529,234 bytes in 236 blocks
==2591== possibly lost: 1,729,206,304 bytes in 232 blocks
==2591== still reachable: 1,606,466 bytes in 1,675 blocks
==2591== suppressed: 0 bytes in 0 blocks
==2591== Rerun with --leak-check=full to see details of leaked memory
==2591==
==2591== For counts of detected and suppressed errors, rerun with: -v
==2591== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 402 from 8)
Killed
I am really confused about what the reason might be. Could anybody help me?