gdb: how to list open files

I am wondering if it is possible to get a list of files/directories that the debugged application has opened but not closed, from within GDB itself.
Currently I set a breakpoint and then use an external program like lsof to check for open files.
But this approach is really cumbersome.
Environment: Debian Lenny with GDB v6.8
EDIT: I am asking because my application is leaking file handles in some situations.

On Linux you can also just look at /proc/<pid>/fd. Doing that from GDB (e.g. if you want to attach it to a breakpoint) is pretty simple. Or of course you can just use lsof, too.
(gdb) info proc
process 5262
cmdline = '/bin/ls'
cwd = '/afs/acm.uiuc.edu/user/njriley'
exe = '/bin/ls'
(gdb) shell ls -l /proc/5262/fd
total 0
lrwx------ 1 njriley users 64 Feb 9 12:45 0 -> /dev/pts/14
lrwx------ 1 njriley users 64 Feb 9 12:45 1 -> /dev/pts/14
lrwx------ 1 njriley users 64 Feb 9 12:45 2 -> /dev/pts/14
lr-x------ 1 njriley users 64 Feb 9 12:45 3 -> pipe:[62083274]
l-wx------ 1 njriley users 64 Feb 9 12:45 4 -> pipe:[62083274]
lr-x------ 1 njriley users 64 Feb 9 12:45 5 -> /bin/ls
(gdb) shell lsof -p 5262
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
ls 5262 njriley cwd DIR 0,18 14336 262358 /afs/acm.uiuc.edu/user/njriley
ls 5262 njriley rtd DIR 8,5 4096 2 /
ls 5262 njriley txt REG 8,5 92312 8255 /bin/ls
ls 5262 njriley mem REG 8,5 14744 441594 /lib/libattr.so.1.1.0
ls 5262 njriley mem REG 8,5 9680 450321 /lib/i686/cmov/libdl-2.7.so
ls 5262 njriley mem REG 8,5 116414 450307 /lib/i686/cmov/libpthread-2.7.so
ls 5262 njriley mem REG 8,5 1413540 450331 /lib/i686/cmov/libc-2.7.so
ls 5262 njriley mem REG 8,5 24800 441511 /lib/libacl.so.1.1.0
ls 5262 njriley mem REG 8,5 95964 441580 /lib/libselinux.so.1
ls 5262 njriley mem REG 8,5 30624 450337 /lib/i686/cmov/librt-2.7.so
ls 5262 njriley mem REG 8,5 113248 441966 /lib/ld-2.7.so
ls 5262 njriley 0u CHR 136,14 16 /dev/pts/14
ls 5262 njriley 1u CHR 136,14 16 /dev/pts/14
ls 5262 njriley 2u CHR 136,14 16 /dev/pts/14
ls 5262 njriley 3r FIFO 0,6 62083274 pipe
ls 5262 njriley 4w FIFO 0,6 62083274 pipe
ls 5262 njriley 5r REG 8,5 92312 8255 /bin/ls

Thanks to Nicholas's help I was able to fully automate the task by defining a macro.
.gdbinit:
define lsof
shell rm -f pidfile
set logging file pidfile
set logging on
info proc
set logging off
shell lsof -p `perl -n -e 'print $1 if /process (.+)/' pidfile`
end
document lsof
List open files
end
Here is a session using the new macro (the program opens a file in the /tmp directory):
file hello
break main
run
next
lsof
Output:
...
hello 2683 voku 5r REG 8,1 37357 11110 /home/voku/hello
hello 2683 voku 6w REG 8,1 0 3358 /tmp/testfile.txt
...

If lsof is not available on your system (I had that problem), you can use GDB's "info os files" command. It prints information about open files for all processes.
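For example (a sketch; I have not verified the exact column layout, which may differ between GDB versions, and the "info os" commands require a reasonably recent GDB running on Linux):

```
(gdb) info proc
process 5262
(gdb) info os files
... one row per open file descriptor, for every process on the system;
    filter for the rows whose PID column matches 5262 ...
```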

No, GDB has no built-in command for this, but you can run lsof and filter its output down to the debugged process.
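For example (the process name "myapp" is a placeholder, and the availability of lsof/pgrep is an assumption; the /proc fallback works on any Linux):

```shell
# Filter lsof down to a single process ("myapp" is a placeholder name):
#   lsof -p "$(pgrep -x myapp)"
# A portable fallback without lsof, demonstrated here on the current shell:
ls -l /proc/$$/fd
```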

Related

When attempting to copy a dir from the repo to a container in a workflow, it's as if the COPY command did nothing

My repo:
/
  dbt-action/
    action.yml
    Dockerfile
    entrypoint.sh
    dbt/
      profiles.yml
My workflow step:
- name: Run DBT
  uses: ./dbt-action
My Dockerfile:
FROM ghcr.io/dbt-labs/dbt-redshift:1.3.latest
COPY dbt .dbt
COPY entrypoint.sh /entrypoint.sh
My entrypoint:
#!/bin/bash
pwd
ls -la
Outputs the following:
drwxr-xr-x 6 1001 123 4096 Jan 7 13:06 .
drwxr-xr-x 6 root root 4096 Jan 7 13:06 ..
drwxr-xr-x 8 1001 123 4096 Jan 7 13:06 .git
drwxr-xr-x 3 1001 123 4096 Jan 7 13:06 .github
drwxr-xr-x 3 1001 123 4096 Jan 7 13:06 blah
-rw-r--r-- 1 1001 123 1744 Jan 7 13:06 README.md
drwxr-xr-x 3 1001 123 4096 Jan 7 13:06 dbt-action
Expected output:
Same as above, but with an additional .dbt directory coming from COPY dbt .dbt in my Dockerfile.
Why don't I see the .dbt dir when I run ls -la in my entrypoint?
Seems like you are executing your docker build from the wrong working directory, since the dbt-action folder is present but not its contents. Can you double-check the PWD before you build?
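One more thing worth ruling out (an assumption, not confirmed by the output above): the destination of COPY dbt .dbt is resolved relative to the image's WORKDIR, and GitHub Actions mounts the workspace over the container's working directory when it runs a Docker action, which can shadow files copied there. Making the destination an absolute path outside the workspace sidesteps both issues; a hedged sketch of the Dockerfile (the /root/.dbt location is an assumption based on where dbt looks for profiles):

```dockerfile
FROM ghcr.io/dbt-labs/dbt-redshift:1.3.latest
# Copy to an absolute path so the result does not depend on WORKDIR
# (and cannot be shadowed by the workspace mount at runtime):
COPY dbt /root/.dbt
COPY entrypoint.sh /entrypoint.sh
```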

Permissions problem with UMASK and Nautilus on CentOS 7.7

I have a problem with the permissions of files/folders created by Nautilus. As a test I set the UMASK of my user to 0000:
[simone@MYPC:~] >cat /mnt/home/simone/.bashrc | grep mask
# User Umask Override
umask 0000
[simone@MYPC:~] >cat /mnt/home/simone/.bashrc
# $HOME/.bashrc
umask 0000
[simone@MYPC:~] >umask
000
When I write a file or folder passing from terminal I can do it with the required privileges:
[simone@MYPC:~/Desktop] >touch file1.txt
[simone@MYPC:~/Desktop] >mkdir Folder1
[simone@MYPC:~/Desktop] >ls -la
-rw-rw-rw-. 1 simone home 0 Mar 21 2022 file1.txt
drwxrwxrwx. 2 simone home 4096 Mar 21 2022 Folder1
But when I create a file (with Text Editor) or a folder through Nautilus, I get different permissions:
[simone@MYPC:~] >ls -lart /tmp
drwxr-xr-x. 1 simone home 0 Mar 18 10:04 FolderByNautilus
[simone@MYPC:~] >ls -la /tmp/fileNautilus.txt
-rw-r--r--. 1 simone home 5 Mar 11 2022 /tmp/fileNautilus.txt
I would like Nautilus and the text editor to write with umask 0000, producing the same permissions as my terminal does.
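Part of the explanation (an assumption about this setup, but generally true for GNOME): Nautilus is not started from a login shell, so it never reads .bashrc; it inherits its umask from the graphical session (PAM/systemd), which is why /etc/login.defs or pam_umask is usually the right knob. On kernels 4.7 and newer you can read a running process's effective umask straight from /proc to confirm this (stock CentOS 7 ships kernel 3.10, so the field may be missing there); a sketch, shown on the current shell rather than Nautilus:

```shell
# Print the effective umask of a process from /proc/<pid>/status
# (substitute Nautilus's PID, e.g. from "pgrep -x nautilus", for $$):
grep -i '^Umask' /proc/$$/status
```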

GCP's SSH terminal not working after stopping and starting a VM instance

I am using GCP VM instances of type N1-standard (8 vCPU / 30 GB and 4 vCPU / 15 GB).
OS: Debian GNU/Linux 10 (buster)
I have been facing this issue for the last month.
A public-key "permission denied" error is one of the messages I see while trying to access the instance from Cloud Shell.
I had run the command chmod 777 <home directory> earlier.
I've tried to reproduce your steps and was able to solve this issue.
Please have a look at my steps below:
create VM instances:
gcloud compute instances create instance-1 --zone=europe-west3-a --machine-type=e2-medium --image=ubuntu-1804-bionic-v20200701 --image-project=ubuntu-os-cloud
gcloud compute instances create instance-2 --zone=europe-west3-a --machine-type=e2-medium --image=ubuntu-1804-bionic-v20200701 --image-project=ubuntu-os-cloud
change permissions recursively on my home directory at the VM instance instance-1:
instance-1:~$ chmod -R 777 ~
instance-1:~$ ls -la
...
drwxrwxrwx 2 username username 4096 Jul 15 07:50 .ssh
create snapshot of the VM instance instance-1 boot disk:
gcloud compute disks snapshot instance-1 --snapshot-names instance-1-snapshot --zone=europe-west3-a
create a new disk with the snapshot:
gcloud compute disks create instance-1-snapshot-disk --zone=europe-west3-a --source-snapshot=instance-1-snapshot
attach created disk instance-1-snapshot-disk to the VM instance instance-2:
instance-2:~$ ls -l /dev/ | grep sd
brw-rw---- 1 root disk 8, 0 Jul 15 07:39 sda
brw-rw---- 1 root disk 8, 1 Jul 15 07:39 sda1
brw-rw---- 1 root disk 8, 14 Jul 15 07:39 sda14
brw-rw---- 1 root disk 8, 15 Jul 15 07:39 sda15
instance-2:~$ mount | grep sda
/dev/sda1 on / type ext4 (rw,relatime)
/dev/sda15 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
then
gcloud compute instances attach-disk instance-2 --disk=instance-1-snapshot-disk --zone=europe-west3-a
after that
instance-2:~$ ls -l /dev/ | grep sd
brw-rw---- 1 root disk 8, 0 Jul 15 07:39 sda
brw-rw---- 1 root disk 8, 1 Jul 15 07:39 sda1
brw-rw---- 1 root disk 8, 14 Jul 15 07:39 sda14
brw-rw---- 1 root disk 8, 15 Jul 15 07:39 sda15
brw-rw---- 1 root disk 8, 16 Jul 15 08:04 sdb
brw-rw---- 1 root disk 8, 17 Jul 15 08:04 sdb1
brw-rw---- 1 root disk 8, 30 Jul 15 08:04 sdb14
brw-rw---- 1 root disk 8, 31 Jul 15 08:04 sdb15
instance-2:~$ sudo mkdir /mnt/instance-1-snapshot-disk
instance-2:~$ sudo mount /dev/sdb1 /mnt/instance-1-snapshot-disk
instance-2:~$ ls -la /mnt/instance-1-snapshot-disk
total 104
drwxr-xr-x 23 root root 4096 Jul 15 07:56 .
drwxr-xr-x 3 root root 4096 Jul 15 08:05 ..
drwxr-xr-x 2 root root 4096 Jul 1 19:14 bin
drwxr-xr-x 4 root root 4096 Jul 1 19:19 boot
drwxr-xr-x 4 root root 4096 Jul 1 19:11 dev
drwxr-xr-x 93 root root 4096 Jul 15 07:55 etc
drwxr-xr-x 4 root root 4096 Jul 15 07:50 home
lrwxrwxrwx 1 root root 30 Jul 1 19:18 initrd.img -> boot/initrd.img-5.3.0-1030-gcp
lrwxrwxrwx 1 root root 30 Jul 1 19:18 initrd.img.old -> boot/initrd.img-5.3.0-1030-gcp
drwxr-xr-x 22 root root 4096 Jul 1 19:17 lib
drwxr-xr-x 2 root root 4096 Jul 1 19:01 lib64
drwx------ 2 root root 16384 Jul 1 19:13 lost+found
drwxr-xr-x 2 root root 4096 Jul 1 19:01 media
drwxr-xr-x 2 root root 4096 Jul 1 19:01 mnt
drwxr-xr-x 2 root root 4096 Jul 1 19:01 opt
drwxr-xr-x 2 root root 4096 Apr 24 2018 proc
drwx------ 3 root root 4096 Jul 15 07:36 root
drwxr-xr-x 4 root root 4096 Jul 1 19:19 run
drwxr-xr-x 2 root root 4096 Jul 1 19:17 sbin
drwxr-xr-x 6 root root 4096 Jul 15 07:36 snap
drwxr-xr-x 2 root root 4096 Jul 1 19:01 srv
drwxr-xr-x 2 root root 4096 Apr 24 2018 sys
drwxrwxrwt 7 root root 4096 Jul 15 07:56 tmp
drwxr-xr-x 10 root root 4096 Jul 1 19:01 usr
drwxr-xr-x 13 root root 4096 Jul 1 19:12 var
lrwxrwxrwx 1 root root 27 Jul 1 19:18 vmlinuz -> boot/vmlinuz-5.3.0-1030-gcp
lrwxrwxrwx 1 root root 27 Jul 1 19:18 vmlinuz.old -> boot/vmlinuz-5.3.0-1030-gcp
change permissions:
.ssh directory: 700 drwx------
public key (.pub file): 644 -rw-r--r--
private key (id_rsa): 600 -rw-------
Lastly, your home directory should not be writable by the group or others: 755 drwxr-xr-x
instance-2:~$ chmod -R 755 /mnt/instance-1-snapshot-disk/home/username/
instance-2:~$ chmod -R 700 /mnt/instance-1-snapshot-disk/home/username/.ssh/
instance-2:~$ chmod 644 /mnt/instance-1-snapshot-disk/home/username/.ssh/authorized_keys
unmount the disk when you finish:
instance-2:~$ sudo umount /mnt/instance-1-snapshot-disk/
detach disk instance-1-snapshot-disk from the VM instance instance-2:
gcloud compute instances detach-disk instance-2 --disk=instance-1-snapshot-disk --zone=europe-west3-a
create a new instance from the repaired disk:
gcloud compute instances create instance-3 --zone=europe-west3-a --machine-type=e2-medium --disk=name=instance-1-snapshot-disk,boot=yes
check the SSH connection to the new VM instance instance-3.
In addition, please have a look at the documentation Troubleshooting SSH section Inspect the VM instance without shutting it down to find more details.
From the owner's account I tried to access instance-1, but the owner is also not able to connect to it.
The owner of the project got this pop-up on the SSH window:
[1]: https://i.stack.imgur.com/y2fzC.jpg
I have observed that on a freshly created instance, if I add some files (e.g. git clone a repo) and then restart it, I am able to connect over SSH again.

Bash, need to change word order in multiple directories

I have a considerable classical FLAC collection where each album is a directory. I've realized that I have used a sub-optimal structure and need to rename all the directories.
My current naming convention is:
COMPOSER (CONDUCTOR) - NAME OF PIECE
E.g.
"Bach (Celibidache) - Mass in F minor"
I want to change the naming to
COMPOSER - NAME OF PIECE (CONDUCTOR)
I.e.
"Bach - Mass in F minor (Celibidache)"
There are some possible exceptions, the (CONDUCTOR) may be (CONDUCTOR, SOLOIST) and some directories do not have the (CONDUCTOR) part and should be left as is. The NAME OF PIECE can contain all legal letters and symbols.
All albums are located in the same parent directory, so no sub-directories.
What is an easy way to do this?
Use Perl's rename (some distributions ship it as rename - Ubuntu and related; others as prename - Fedora and Red Hat, AFAIK). Check first.
prename -n -- '-d && s/(\(.*\)) - (.*)/- \2 \1/' *
-n: don't rename, just print the results - remove it once you are happy with the output.
--: end of the options, start of the Perl expression and file names.
-d: only act when the file is a directory.
s/.../.../: the substitution.
Example:
[test01@localhost composers]$ ls -la
total 12
drwxrwxr-x 3 test01 test01 4096 Feb 14 12:37 .
drwxrwxr-x. 7 test01 test01 4096 Feb 14 12:23 ..
drwxrwxr-x 2 test01 test01 4096 Feb 14 12:37 'Bach (Celibidache) - Mass in F minor'
-rw-rw-r-- 1 test01 test01 0 Feb 14 12:27 'Bach (Celibidache) - Mass in F minor.flac'
[test01@localhost composers]$ prename -n -- '-d && s/(\(.*\)) - (.*)/- \2 \1/' *
Bach (Celibidache) - Mass in F minor -> Bach - Mass in F minor (Celibidache)
[test01@localhost composers]$ prename -- '-d && s/(\(.*\)) - (.*)/- \2 \1/' *
[test01@localhost composers]$ ls -la
total 12
drwxrwxr-x 3 test01 test01 4096 Feb 14 12:38 .
drwxrwxr-x. 7 test01 test01 4096 Feb 14 12:23 ..
-rw-rw-r-- 1 test01 test01 0 Feb 14 12:27 'Bach (Celibidache) - Mass in F minor.flac'
drwxrwxr-x 2 test01 test01 4096 Feb 14 12:37 'Bach - Mass in F minor (Celibidache)'
Note that without -d both the file and the directory would have been renamed.
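If Perl rename is not installed, the same transformation can be done in plain POSIX shell with sed and mv. A sketch (test on a copy first; the demo directory below is a throwaway created just to show the effect):

```shell
# Pure-shell alternative to prename: COMPOSER (CONDUCTOR) - NAME
# becomes COMPOSER - NAME (CONDUCTOR); non-matching names are skipped.
tmp=$(mktemp -d)
mkdir "$tmp/Bach (Celibidache) - Mass in F minor"
cd "$tmp"
for dir in */; do
    dir=${dir%/}
    # Move the parenthesised part after the piece name:
    new=$(printf '%s\n' "$dir" | sed 's/^\(.*\) \((.*)\) - \(.*\)$/\1 - \3 \2/')
    [ "$new" != "$dir" ] && mv -- "$dir" "$new"
done
ls
```

Like prename, the sed pattern also matches regular files; add a `[ -d "$dir" ]` guard if files and directories are mixed.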

URI to access a file in HDFS

I have setup a cluster using Ambari that includes 3 nodes .
Now I want to access a file in a HDFS using my client application.
I can find all node URIs under Data Nodes in Amabari.
What is the URI + Port I need to use to access a file ? I have used the default installation process.
The default port is 8020.
You can access HDFS paths in 3 different ways.
Simply use "/" as the root path
For example:
E:\HadoopTests\target>hadoop fs -ls /
Found 6 items
drwxrwxrwt - hadoop hdfs 0 2015-08-17 18:43 /app-logs
drwxr-xr-x - mballur hdfs 0 2015-11-24 15:36 /tmp
drwxrwxr-x - mballur hdfs 0 2015-10-20 15:27 /user
Use "hdfs:///"
For example:
E:\HadoopTests\target>hadoop fs -ls hdfs:///
Found 6 items
drwxrwxrwt - hadoop hdfs 0 2015-08-17 18:43 hdfs:///app-logs
drwxr-xr-x - mballur hdfs 0 2015-11-24 15:36 hdfs:///tmp
drwxrwxr-x - mballur hdfs 0 2015-10-20 15:27 hdfs:///user
Use "hdfs://{NameNodeHost}:8020/"
For example:
E:\HadoopTests\target>hadoop fs -ls hdfs://MBALLUR:8020/
Found 6 items
drwxrwxrwt - hadoop hdfs 0 2015-08-17 18:43 hdfs://MBALLUR:8020/app-logs
drwxr-xr-x - mballur hdfs 0 2015-11-24 15:36 hdfs://MBALLUR:8020/tmp
drwxrwxr-x - mballur hdfs 0 2015-10-20 15:27 hdfs://MBALLUR:8020/user
In this case, "MBALLUR" is the name of my Name Node host.
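Putting it together, a full file URI has the shape hdfs://<namenode-host>:<port>/<path>. If you are unsure which host and port your cluster uses, "hdfs getconf -confKey fs.defaultFS" prints the configured default filesystem. A sketch of composing such a URI in shell (the host, port, and path values below are placeholders, not real cluster details):

```shell
# Compose an HDFS file URI from its parts (all values are placeholders).
namenode=MBALLUR
port=8020
path=/user/mballur/data.csv
uri="hdfs://${namenode}:${port}${path}"
echo "$uri"
```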