On the Docker image debian:stretch-slim, I couldn't delete a specific folder on an NFS drive using rm -rf /folder-name as root (or rm -rf * after entering the folder).
I got the following error back:
rm: cannot remove 'test-ikmgfjhv/dev/.nfse47cf31c6b1dd52500000009': Device or resource busy
After a lot of searching, I eventually got to the following link:
https://uisapp2.iu.edu/confluence-prd/pages/viewpage.action?pageId=123962105
It describes exactly why those files exist on NFS and how to handle them.
Since I wasn't on the same machine as the process holding the file (it was running in another container), I had to work around that: first make sure the process using the file is killed on the first machine, then delete the file from the second one, according to the project's needs.
It is possible that the .nfs file is still held open by a running process (for example, a file being edited in vim).
For example, if the hidden file is .nfs000000000189806400000085, run this command to get the pid:
lsof .nfs000000000189806400000085
This will output the PID and other info related to that file.
Then kill the process:
kill -9 <PID>
Be aware that if the file was not saved you will lose the information.
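Putting those steps together, a minimal sketch of the whole sequence, using the same hypothetical file name as above, with <PID> standing for whatever lsof reports:
# find the process that still holds the .nfs file open
lsof .nfs000000000189806400000085
# kill it using the PID from the lsof output (unsaved data in that process will be lost)
kill -9 <PID>
# once nothing holds the file, the .nfs entry disappears and the directory can be removed
rm -rf /folder-name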
While running any command, if you get an error like:
/home/mmandi/testcases/.nfs000000e75853: device or resource busy
Go to the directory where this file is being shown.
For example, in this case: /home/mmandi/testcases/
Do the following:
# ls -la : This displays the contents of the directory, including files starting with ".".
Here it displays the .nfs000000e7585 file.
# lsof .nfs000000e7585
This will list the PID.
# kill -9 <PID>
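If your lsof supports the -t option (terse output, PIDs only), the last two steps can be collapsed into one line; the file name below is just the one from this example:
# kill whatever process still holds the .nfs file open
kill -9 $(lsof -t .nfs000000e7585)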
Under some circumstances, the rm command in Git-Bash deletes files that can't be deleted in Explorer, the cmd prompt, PowerShell, or via C++ standard library calls.
Why?
This is perplexing to me because I know there is no magic here and I assume that they are all using the same Win32 API.
For example, I have database snapshots that remain open and cannot be deleted using the other methods described, but are successfully deleted by Git-Bash rm:
Explorer delete: "The action cannot be completed because the file is open."
cmd: del <path> : "Access is denied"
PS: Remove-Item -Force -Path <path> : "Cannot remove item. Access to the path is denied."
C++ remove() : returns -1
C++ unlink() : returns -1
C++ _unlink() : returns -1
git-bash rm <path> : success
The above can be performed repeatedly on different files.
There are other locked files that git-bash rm deletes successfully as well (I have used it in the past, not recently and I don't have other specific examples).
However, it doesn't always work: in a test application I opened a text file using fopen(), and none of the methods, including Git-Bash rm, could successfully delete it.
So, how does Git-Bash rm work?
I was able to figure out how this works.
Interestingly, I used Git-Bash's strace utility on its rm command.
It turns out that Git Bash uses CygWin, and the delete routine is found in the CygWin syscalls.cc file which tries to delete a file in a few different ways.
Eventually it tries to move the file to the Recycle Bin, where it deletes the locked file by opening it with the Windows Driver call NtOpenFile() with a FILE_DELETE_ON_CLOSE flag, then closing it using NtClose().
Not sure if it would be proper to copy CygWin's code into the response here, but details on all of the above can be found in the link provided.
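For anyone who wants to retrace that investigation, a rough sketch, assuming the strace utility that ships with Git Bash is on the PATH and that the path to the locked file is only a placeholder:
# trace what rm does internally while deleting the locked file
strace -o rm.trace rm /c/path/to/locked-file.db
# then inspect the trace to see how the unlink is actually carried out
less rm.trace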
I cannot seem to find a way to configure my abrt event to copy the coredump to a custom location. The reason I want to do this is to prevent abrt from pruning my coredumps if the crash directory exceeds MaxCrashReportsSize. With the prerequisite that I have no control over how abrt is configured I would like to export the coredump to a support directory as soon as it is created.
EVENT=post-create pkg_name=raptorio analyzer=CCpp
test -f coredump && { mkdir -p /opt/raptorio/cores; cp -f coredump /opt/raptorio/cores/$(basename `cat executable`).core; }
This event will save one coredump for each C/C++ binary from my raptorio RPM package. When my program crashes abrt prints the following errors in the syslog:
Aug 30 08:28:41 abrtd: mkdir: cannot create directory `/opt/raptorio/cores': Permission denied
Aug 30 08:28:41 abrtd: cp: cannot create regular file `/opt/raptorio/cores/raptord.core': No such file or directory
Aug 30 08:28:41 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2016-08-30-08:28:10-31213' exited with 1
I see that the abrt event runs as root:root, but it is jailed somehow, possibly due to SELinux? I am using abrt 2.0.8 on CentOS 6.
/opt is not the right place to keep transient files. Cores should go in /var/raptorio/cores, perhaps. See the Filesystem Hierarchy Standard.
Assuming your program runs as user 'nobody', make sure 'nobody' has write permissions on that directory, and you should be all set.
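A minimal sketch of that setup, assuming the program really does run as user 'nobody' (per the answer above); the event from the question then only needs its destination changed to the new path:
# create the target directory once, and let the crashing program's user write to it
mkdir -p /var/raptorio/cores
chown nobody:nobody /var/raptorio/cores
chmod 0775 /var/raptorio/cores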
I'm trying to build B2G for the Alcatel One Touch Fire.
After cloning B2G I ran BRANCH=v2.0 ./config.sh hamachi and then on running ./build.sh I get the following:
Pulling "libOmxWmaDec.so"
cp: cannot stat ‘../../../backup-hamachi/system/lib/libOmxWmaDec.so’: No such file or directory
Failed to pull libOmxWmvDec.so. Giving up.
Build failed!
Build with |./build.sh -j1| for better messages
If all else fails, use |rm -rf objdir-gecko| to clobber gecko and |rm -rf out| to clobber everything else.
That error occurs because you don't have the proper blobs to start building the images. Those blobs are stored in ‘../../../backup-hamachi/’ and are automatically pulled from your device.
When you try to build the image, the build.sh script automatically backs up your system and pulls every lib; it will use them for your next build.
My suggestions:
Test your USB cable; if it is too loose, it may be interrupting the backup.
Open an ADB shell to check whether something is killing your ADB server in the meantime.
Check whether that lib exists on your device: use the ADB shell to find it (see the sketch after this list).
If it exists, you can always do a backup manually using Busybox: https://wiki.mozilla.org/File:Busybox-b2g.tar.gz
If it does not exist, find a proper backup for your Hamachi.
Check the following bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1063917
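If you want to check for the blob and pull it by hand first, a rough sketch using plain adb; the destination path is an assumption based on the error message above:
# check whether the blob is present on the device
adb shell ls -l /system/lib/libOmxWmaDec.so
# if it is, pull it into the backup directory the build expects
adb pull /system/lib/libOmxWmaDec.so backup-hamachi/system/lib/libOmxWmaDec.so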
I successfully compiled a MUD's source code, and the instructions say to start up the server using
nohup ./startup &
although when I do this it gives me this error:
$ nohup: ignoring input and appending output to `nohup.out'
nohup: failed to run command `./startup': Permission denied
I have looked all over the internet to find the answer. A few of the posts said to put my Cygwin directory in the root folder (I am using Windows 7), and its directory is C:\cygwin,
so that's not a problem. Can anyone help me with this, please?
Try chmod +x startup, maybe your startup file is not executable.
From "man nohup":
If the standard output is a terminal, all output written by the named
utility to its standard output shall be appended to the end of the
file nohup.out in the current directory. If nohup.out cannot be
created or opened for appending, the output shall be appended to the
end of the file nohup.out in the directory specified by the HOME
environment variable. If neither file can be created or opened for
appending, utility shall not be invoked. If a file is created, the
file's permission bits shall be set to S_IRUSR | S_IWUSR.
My guess is that since "sh -c" doesn't start a login shell, it is inheriting the environment of the invoking shell, including the HOME environment variable, and is trying to open it there. So I would check the permissions of both your current directory and $HOME. You can try to touch test.txt in current directory or $HOME to see if you can perform that command.
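A quick way to run that check, assuming the shell is sitting in the directory you start the MUD from:
# verify you can write both to the current directory and to $HOME (where nohup.out may go)
touch test.txt && echo "current directory is writable"
touch "$HOME/test.txt" && echo "HOME is writable"
ls -ld . "$HOME"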
As staticx writes, check the permissions of the directory (and the user) - and the executable.
Instead of using nohup:
check if nohup is needed at all, try ./startup </dev/null >mud.out 2>mud.err &, then close the terminal window and check if it is running
or just run ./startup in a screen session and detach it (<ctrl>+<a>,<d>)
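For the screen route, a minimal sketch (the session name mud is just an example):
# start a named session and launch the server inside it, then detach with Ctrl-a d
screen -S mud
./startup
# later, reattach to check on it
screen -r mud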
I am using C++ on Ubuntu. I have been using the command:
system("mkdir new_folder");
to make a new folder called new_folder. However, if that folder already exists, the command prints an error message (and the program continues to run afterwards).
Is there a way to stop the error message from printing out?
For this particular command use mkdir -p new_folder.
Generally, you want to fork your process and, in the child, redirect stdout and stderr to /dev/null or similar, then exec to replace the process with the new command.
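For the simple case, a sketch of the command string you could hand to system(), assuming you only want the message gone and don't care why mkdir failed: -p suppresses the "already exists" error, and the redirection discards any remaining diagnostics.
# succeeds silently if the directory exists; other errors are sent to /dev/null
mkdir -p new_folder 2>/dev/null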