Alloc memory in debugged process - gdb

I am attaching to a process with the ptrace syscall. I can read and write its memory with peek and poke (PTRACE_PEEKDATA/PTRACE_POKEDATA), but I want to allocate some memory in the remote process. Is it possible to do this?

I want to allocate some memory in the remote process. Is it possible to do this?
Presumably you want to allocate it using the process's own malloc. Proof by existence:
(gdb) start
(gdb) print malloc(20)
$1 = 0x820430
So yes, it's possible.
The details, however, are quite messy: you'll need to read the symbol table of the inferior process to find where its malloc is, then construct a proper call frame and transfer control to malloc's address using the correct ABI for your target process, and finally clean all of that up.
This is at least 10x harder than what you asked for in your other recent questions.

Related

Analyze Glibc heap memory

I am researching an embedded device that uses glibc 2.25.
When I look at /proc/PID/maps I see some anonymous mappings under the heap section; I understand that these mappings are created when the process uses new.
I dumped those mappings with dd, and there is an interesting value in there that I want to understand: is that buffer allocated or free, and what is its size?
How can I do that, please?
You can use gdb (the GNU Debugger) to inspect the memory of a running process. You can attach to the process using its PID and use the x command to examine memory at a specific address, and the info proc mappings command to view the memory maps of the process, including the extent of the heap. gdb itself has no built-in heap command, but heap-analysis extensions such as pwndbg or gef provide one that lists heap chunks, and you can call glibc's malloc_info function from gdb to show detailed information about the allocator's state.
You can also call the malloc_stats function to display a summary of heap usage, such as the number of bytes obtained from the system and the number of bytes in use.
You can also use the pmap command to display the memory map of a process, including the heap size. This command is available on some systems but may not be present on others.
It's also worth noting that the /proc/PID/maps file can also give you an idea about the heap section of a process.
Please keep in mind that you need to have the right permission to access the process you want to inspect.
Instead of analyzing the memory from /proc, you may want to try the following options, depending on your environment:
use tools like Valgrind if you suspect any kind of leak or invalid read/write.
rather than looking at the output of dd, attach to the running process and inspect memory within it; that gives you context to make sense of the memory usage.
use logging to dump the addresses of allocations, frees, reads, and writes. This allows you to build a better understanding of the memory usage.
You may have to use all of the above options depending upon the complexity of your task.

Why do I get SIGKILL instead of std::bad_alloc? [duplicate]

I'm developing an application for an embedded system with limited memory (Tegra 2) in C++. I'm handling NULL results of new and new[] throughout the code; these sometimes occur, and the application is able to handle them.
The problem is that the system kills the process with SIGKILL if memory runs out completely. Can I somehow tell new to just return NULL instead of the process being killed?
I am not sure what kind of OS you are using, but you should check whether it supports opportunistic memory allocation the way Linux does.
If it is enabled, the following may happen (details/solution are specific to the Linux kernel):
Your new or malloc gets a valid address from the kernel even if there is not enough memory, because ...
The kernel does not really allocate the memory until the very moment of first access.
If all of the "overcommitted" memory is eventually used, the operating system has no choice but to kill one of the involved processes. (It is too late to tell the program that there is not enough memory.) On Linux this is called an Out Of Memory kill (OOM kill), and such kills are logged in the kernel message buffer.
Solution: Disable overcommitting of memory:
echo 2 > /proc/sys/vm/overcommit_memory
Two ideas come to mind.
Write your own memory allocation function rather than depending on new directly. You mentioned you're on an embedded system, where special-purpose allocators are quite common in applications. Are you running your application directly on the hardware, or in a process under an executive/OS layer? If the latter, is there a system API provided for allocating memory?
Check out C++'s set_new_handler and see if it can help you. You can request that a special function be invoked when an allocation by new fails. Perhaps in that function you can take action to prevent whatever is killing the process from happening. Reference: http://www.cplusplus.com/reference/std/new/set_new_handler/

How to create a minidump with stack memory

My program creates a minidump on crash (using MiniDumpWriteDump from DBGHELP.DLL) and I would like to keep the size of the dump as low as possible while still having important memory information available. I have gone through the different possible combinations of flags and callback functionalities you can pass to MiniDumpWriteDump (links to debuginfo.com or MSDN).
I think I am limited to these MINIDUMP_TYPE flags, since it has to work on an old WinXP machine:
MiniDumpNormal
MiniDumpWithDataSegs
MiniDumpWithFullMemory
MiniDumpWithHandleData
MiniDumpFilterMemory
MiniDumpScanMemory
I am searching for a way to combine these flags and the callback function to get a dump with the following requirements:
Relatively small size (a full memory dump results in a ~200 MB file; I want 20 MB at most)
Stack trace of the crashed thread, maybe also the stack trace of other threads but without memory info
Memory information for the whole stack of the crashed thread. This is where it gets complicated: including stack memory should be no problem in terms of size, but heap memory might be overkill.
The question is: how can I limit the memory info to the crashed thread, and how do I include the stack memory (local variables) of the whole call stack?
Is it also possible to include parts of the heap memory, e.g. only those parts that are referenced by the current call stack?
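One way to approach the per-thread filtering is the callback itself: MiniDumpNormal already records every thread's stack, and ThreadWriteFlags lets the callback strip the stack memory of all threads except the faulting one. A hedged sketch (g_crashThreadId and write_dump are illustrative names, not from the question; heap filtering beyond this needs MemoryCallback work not shown here):

```cpp
#include <windows.h>
#include <dbghelp.h>

static DWORD g_crashThreadId;   // set before writing the dump

static BOOL CALLBACK DumpCallback(PVOID,
                                  const PMINIDUMP_CALLBACK_INPUT in,
                                  PMINIDUMP_CALLBACK_OUTPUT out) {
    switch (in->CallbackType) {
    case IncludeThreadCallback:
        return TRUE;                                  // keep every thread's context
    case ThreadCallback:
        if (in->Thread.ThreadId != g_crashThreadId)
            out->ThreadWriteFlags &= ~ThreadWriteStack; // drop other stacks' memory
        return TRUE;
    case MemoryCallback:
        return FALSE;                                 // no extra memory ranges
    default:
        return TRUE;
    }
}

void write_dump(HANDLE file, EXCEPTION_POINTERS* ep) {
    g_crashThreadId = GetCurrentThreadId();
    MINIDUMP_EXCEPTION_INFORMATION mei = { GetCurrentThreadId(), ep, FALSE };
    MINIDUMP_CALLBACK_INFORMATION  mci = { DumpCallback, nullptr };
    MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                      MINIDUMP_TYPE(MiniDumpNormal | MiniDumpWithHandleData),
                      &mei, nullptr, &mci);
}
```

This keeps the crashed thread's full stack (and thus its local variables at every frame) while other threads contribute only their contexts, which keeps the file far below a full dump.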

Dynamic Memory Allocation / Making use of unused memory

I'm going to write an application that needs a lot of memory dynamically.
Most of the memory is used for caching purposes and is just used for speed ups.
Those parts could actually be freed on demand.
Unfortunately my kernel will kill the process if it runs out of memory, when it could simply free the cache memory instead. So what I want is very similar to the Linux page cache as explained here. Is it possible to implement such behaviour in userspace in a convenient way?
I'm thinking about implementing such a cache with "cache files" stored on a ramfs/tmpfs using memory-mapped file I/O, but I'm sure there is a more comfortable way.
Thanks in advance!
Yes, this should be possible. Most kernels have a memory allocation method where the process sleeps until it gets the requested memory (all the kernels I've worked with do). If yours doesn't, this may be a good time to implement one. You could check out the kmem functions in Linux.
However, this is a passive way of doing what you've asked: the process will be waiting until someone else frees up memory.
If you want to free up memory from your own process's address space when there's no memory left, this can be done easily from user space. You need to keep a journal of allocated memory and free the entries you don't need on demand when an allocation fails.

C++ Multithread program on linux memory issue

I'm developing software that requires the creation and deletion of a large number of threads.
When I create threads the memory usage increases, and when I delete them (confirmed using the command ps -mo THREAD -p <pid>), the memory attributed to the program (top command) does not decrease. As a result I run out of memory.
I have used Valgrind to check for memory errors/leaks and I can't find any. This is on a Debian box. Please let me know what the issue could be.
How are you deleting the threads?
The notes at http://www.kernel.org/doc/man-pages/online/pages/man3/pthread_join.3.html explain that in some cases you need to call pthread_join to free up a terminated thread's resources.
You do not run out of memory.
The "free memory" you see in the top command is not the only memory available when required. The Linux kernel uses as much of the free memory as possible for its page cache. When a process requires memory, the kernel can throw away page-cache pages and give that memory to the process.
In other words: Linux uses the free memory instead of just leaving it sitting idle...
Use free -m: in the row labeled "-/+ buffers/cache:" you will see the real amount of memory available to processes.