I'm new to GDB and can't get the command history working (auto-complete / up-arrow to recall old commands).
At my root I have a .gbd_init file (and I also have an identical file in my working directory):
set history save
set history filename ~/.gbd_history
set history on
set history expansion on
I also have an empty .gbd_history file at my root.
In GDB, 'show history' gives:
(gdb) show history
expansion: History expansion on command input is on.
filename: The filename in which to record the command history is
"/home/jenny/C_Programs/CS575_Algorithms/HW3/Problem1/.gdb_history".
save: Saving of the history record on exit is on.
size: The size of the command history is unlimited.
(gdb)
But the GDB command 'show commands' yields nothing, and the up arrow and tab keys don't do anything.
The .gbd_history file exists but is empty.
jenny@jennys-virtual-machine ~/C_Programs/CS575_Algorithms/HW3/Problem1 $ ls -ls .gbd*
0 -rwxrwxrwx 1 jenny jenny 0 Oct 1 22:16 .gbd_history
4 -rwxrwxrwx 1 jenny jenny 94 Oct 1 22:01 .gbdinit
What am I missing?
Thanks for any help. Typing in each command is getting old fast.
Jenny
> At my root I have a .gbd_init file (and I also have an identical file in my working directory):
You have misspelled the gdb init file name. It should be named .gdbinit. .gbd_init is not executed by gdb, so history saving never gets turned on.
Your .gdbinit script is obviously not being executed, as you can see from your history filename, which still has its default value (.gdb_history in the current directory).
Try executing the init script in your current directory with gdb -x .gdbinit, as automatic loading may be disabled by default. At least my gdb wants me to set some kind of auto-load safe-path before it will allow loading of init scripts residing outside of the home directory; see the GDB documentation on auto-loading for details.
If you are looking to use history only so you don't have to retype commands, you could probably just write a script with all those commands and execute it with gdb -x script, etc.
And be sure to check the related SO thread on how to enable command history easily.
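As a sketch of the fix (the exact old names depend on which misspelling you actually have, so check with ls first):

mv ~/.gbd_init ~/.gdbinit
mv ~/.gbd_history ~/.gdb_history

and in ~/.gdbinit keep the same settings, pointing at the corrected history file name:

set history save
set history filename ~/.gdb_history
set history on
set history expansion on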
Related
I am using the C++ system() call to execute a shell script. The caller program is running as root, but the shell script which is called from the C++ code runs as a different user.
How can I make sure the shell script also runs as the root user, like the C++ binary?
I don't want to rely on the sudo command, as it can ask for a password.
[user@user ~]$ ll a.out temp.sh
-rwsrwsr-x 1 root root 8952 Jun 14 13:16 a.out
-rwxrwxr-x 1 user user 34 Jun 14 15:43 temp.sh
[user@user ~]$ cat temp.sh
#!/bin/bash
read -n 1 -p "Hello"
[user@user ~]$ ps aux | grep temp
root 13247 0.0 0.0 13252 1540 pts/0 S+ 15:44 0:00 ./a.out ./temp.sh
user 13248 0.0 0.0 113152 2544 pts/0 S+ 15:44 0:00 /bin/bash ./temp.sh
C++ code:
#include <bits/stdc++.h>
using namespace std;
int main(int argc, char *argv[])
{
system(argv[1]);
return 0;
}
A few bits of documentation to start:
From man 3 system's caveats section:
Do not use system() from a privileged program (a set-user-ID or set-group-ID program, or a program with capabilities) because strange values for some environment variables might be used to subvert system integrity. For example, PATH could be manipulated so that an arbitrary program is executed with privilege. Use the exec(3) family of functions instead, but not execlp(3) or execvp(3) (which also use the PATH environment variable to search for an executable).
system() will not, in fact, work properly from programs with set-user-ID or set-group-ID privileges on systems on which /bin/sh is bash version 2: as a security measure, bash 2 drops privileges on startup. (Debian uses a different shell, dash(1), which does not do this when invoked as sh.)
And from the bash manual's description of the -p command line argument (Emphasis added):
Turn on privileged mode. In this mode, the $BASH_ENV and $ENV files are not processed, shell functions are not inherited from the environment, and the SHELLOPTS, BASHOPTS, CDPATH and GLOBIGNORE variables, if they appear in the environment, are ignored. If the shell is started with the effective user (group) id not equal to the real user (group) id, and the -p option is not supplied, these actions are taken and the effective user id is set to the real user id. If the -p option is supplied at startup, the effective user id is not reset. Turning this option off causes the effective user and group ids to be set to the real user and group ids.
So even if your /bin/sh doesn't drop privileges when run, bash will when it's run in turn without explicitly telling it not to.
So one option is to scrap using system(), and do a lower-level fork()/exec() of bash -p your-script-name.
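For instance, a minimal sketch of that approach (this is not the poster's program; names and error handling are kept deliberately simple):

#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s script\n", argv[0]);
        return 1;
    }
    pid_t pid = fork();
    if (pid == 0) {
        // Child: run the script under bash -p so the effective UID
        // granted by the set-uid binary is not dropped.
        execl("/bin/bash", "bash", "-p", argv[1], (char *)nullptr);
        std::perror("execl");   // only reached if execl fails
        _exit(127);
    }
    if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);   // parent waits for the script to finish
        return 0;
    }
    std::perror("fork");
    return 1;
}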
Some other approaches to allowing scripts to run at elevated privileges are mentioned in Allow suid on shell scripts. In particular the answer using setuid() to change the real UID looks like it's worth investigating.
Or configure sudo to not require a password for a particular script for a given user.
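For example, a hypothetical sudoers entry (edit it with visudo; the user name and script path are placeholders) might look like:

user ALL=(root) NOPASSWD: /home/user/temp.sh

after which the program could run the script through sudo /home/user/temp.sh without a password prompt.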
Also see Why should I not #include <bits/stdc++.h>?
I installed flatpak using guix, but it segfaulted on startup. I wanted to debug it, but guix installs a wrapper script for flatpak, so I get this error when trying to run it under gdb:
"/home/user/.guix-profile/bin/flatpak": not in executable format: file format not recognized
and I tried to edit the wrapper script to call gdb, but this wrapper script is not even editable by root, because it is owned by root and has read-only permissions.
Simply copy the script to your current working directory:
cp /home/user/.guix-profile/bin/flatpak .
Mark it as writable:
chmod +w flatpak
Edit the script with your favourite text editor, to replace the string exec -a with exec gdb --args.
And finally, run it with any arguments you provided before, when it misbehaved:
./flatpak remote-add flathub https://flathub.org/repo/flathub.flatpakrepo
In this particular case, this wasn't immediately super useful, because debug symbols haven't been built for this package. But at least I could get a backtrace out of gdb.
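Putting those steps together as a shell sketch (the sed expression assumes the wrapper contains a line of the form exec -a "$0" /path/to/real/flatpak "$@", which may differ on your system):

cp /home/user/.guix-profile/bin/flatpak .
chmod +w flatpak
sed -i 's/exec -a "[^"]*"/exec gdb --args/' flatpak
./flatpak remote-add flathub https://flathub.org/repo/flathub.flatpakrepo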
On the docker image debian:stretch-slim, I couldn't delete a specific folder on an NFS drive using rm -rf /folder-name as root (or rm -rf * after entering the folder).
Got the following error back:
rm: cannot remove 'test-ikmgfjhv/dev/.nfse47cf31c6b1dd52500000009': Device or resource busy
After a lot of searching, eventually got to the following link:
https://uisapp2.iu.edu/confluence-prd/pages/viewpage.action?pageId=123962105
It describes exactly why those files exist on NFS and how to handle them.
Since I wasn't on the same machine the process runs on (it was another container), I had to work around that: first make sure the process using the file is killed on the first machine, then delete the file from the second one, according to the project's needs.
It is possible that the .nfs file is still attached to a running process that has it open (for example, a file being edited in vim).
For example, if the hidden file is .nfs000000000189806400000085, run this command to get the pid:
lsof .nfs000000000189806400000085
This will output the PID and other info related to that file.
Then kill the process:
kill -9 <PID>
Be aware that if the file was not saved, you will lose that information.
If, while running any command, you get an error like:
/home/mmandi/testcases/.nfs000000e75853: Device or resource busy
Go to the directory where this file is being shown.
For example, in this case: /home/mmandi/testcases/
Do the following:
# ls -la : this will display the contents of the directory, including files whose names start with "."
Here it displays the .nfs000000e7585 file.
# lsof .nfs000000e7585
This will list the PID holding the file open.
# Then kill -9 <PID>.
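As a compact sketch of that sequence (the .nfs name is just the example from above; lsof -t prints only the matching PIDs):

kill -9 $(lsof -t .nfs000000e7585)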
I successfully compiled a MUD's source code, and the instructions say to start up the server using
nohup ./startup &
but when I do this it gives me this error:
$ nohup: ignoring input and appending output to `nohup.out'
nohup: failed to run command `./startup': Permission denied
I have looked all over the internet for the answer. A few posts said to put my Cygwin directory in the root folder (I am using Windows 7), and its directory is C:\cygwin,
so that's not the problem. Can anyone help me with this, please?
Try chmod +x startup; maybe your startup file is not executable.
From "man nohup":
If the standard output is a terminal, all output written by the named
utility to its standard output shall be appended to the end of the
file nohup.out in the current directory. If nohup.out cannot be
created or opened for appending, the output shall be appended to the
end of the file nohup.out in the directory specified by the HOME
environment variable. If neither file can be created or opened for
appending, utility shall not be invoked. If a file is created, the
file's permission bits shall be set to S_IRUSR | S_IWUSR.
My guess is that since "sh -c" doesn't start a login shell, it is inheriting the environment of the invoking shell, including the HOME environment variable, and nohup is trying to open nohup.out there. So I would check the permissions of both your current directory and $HOME. You can try to touch test.txt in the current directory or in $HOME to see if you can create a file in either place.
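For instance, a quick way to check (test.txt is just a throwaway name):

touch ./test.txt && rm ./test.txt              # can we write to the current directory?
touch "$HOME/test.txt" && rm "$HOME/test.txt"  # can we write to $HOME?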
As staticx writes, check the permissions of the directory (and the user) - and the executable.
Instead of using nohup:
check if nohup is needed at all, try ./startup </dev/null >mud.out 2>mud.err &, then close the terminal window and check if it is running
or just run ./startup in a screen session and detach it (<ctrl>+<a>,<d>)
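As a sketch of the screen approach (the session name mud is arbitrary), you can start it already detached and reattach later:

screen -dmS mud ./startup
screen -r mud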
In bash, I can use the script command, which dumps everything that shows on the shell to a file, including:
commands typed
PS1 line
stdout and stderr of commands
What is the equivalent in gdb?
I tried to run shell script from inside GDB, but after I hit return I was in the shell, had lost the prompt, and could not run commands any more. Moreover, I could not use Ctrl+C or Ctrl+\ to exit; I had to force-kill the /bin/login tty2 process to get out.
If you want to log GDB's output, you can use the GDB logging output commands, e.g.:
set logging file mylog.txt
set logging on
If you want to redirect your program's output to a file, you can use a redirect, e.g.:
run myprog > mylog.txt
See the chapter on program I/O in the GDB manual for more information.
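Alternatively, the same logging commands can be passed on the command line when starting GDB (mylog.txt and myprog are placeholders):

gdb -ex 'set logging file mylog.txt' -ex 'set logging on' ./myprog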
Create a text file, e.g. gdbCommands.txt, with the following commands:
set logging file my_log_file
set logging on
bt 10
q
bt 10 limits the backtrace to the innermost 10 frames (function calls); in our example that is 10 lines.
Execute gdb using the following command, assuming a core dump file core.2345
gdb -x gdbCommands.txt myApp core.2345
Open my_log_file and inspect the backtrace!
See also: How to redirect gdb backtrace output to a text file.
I have logging enabled using:
set trace-commands on
set pagination off
set logging file $log
and show logging reports (to both the terminal and the file):
+show logging
Currently logging to mylog.
Logs will be appended to the log file.
Output will be logged and displayed
If I print the value of a variable, that also gets logged (to both the terminal and the file):
+p myvar
$2 = 0
But if I run a command like where or "info b", all I get logged to the file is:
+where
+info b
Does anyone know why, or how to fix it?
Have a look at the GDB documentation; search for "Canned Sequences of Commands". There is a way to save GDB commands in a file and run them with the source command, and you can use some GDB commands in these scripts (like echo, output and printf) to print information available to GDB.
If you want that output in a file, use set logging file FILE.
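As a small sketch of such a canned command file (the file names and the breakpoint are made up for illustration):

# commands.gdb
set logging file session.log
set logging on
break main
run
echo --- state at main ---\n
info registers
where
set logging off

Run it with gdb -batch -x commands.gdb ./myprog, or load it from a running session with source commands.gdb.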