Cannot delete file with special characters under Linux

When trying to run a PHP script under Linux, my command fails and I am left with a new file in the folder.
The file is called ");? ?for ($j=0;$j".
It is impossible to delete with rm, impossible to move.
Any ideas, please?

Just an untested idea:
Maybe you can try to delete the whole directory with rm -R folder_name.
You could also add -f: rm -R -f folder_name
Of course, don't forget to save the other files beforehand, but that should be easy as there are only a few.
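A minimal sketch of that approach, run from inside the problem folder (good_script.php and folder_name are names made up for this example):
cp -- good_script.php ../     # save the few files you want to keep
cd ..
rm -R -f folder_name          # remove the whole directory, rogue file included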

Related

Bash script to change file extension using regex

I have a lot of files I've copied over from my iPhone file system. To start with they were MP3 files, but an app on the iPhone changed their names to some random stuff which looks like:
1c03e04cc1bbfcb0c1237f57f1d0ae2e.mp3?extra=f7NhT68pNkmEbGA_I1WbVShXQ2E2gJAGBKSEyh3hf0hsbLB1cqnXDuepYA5ubcFm_B3KSsrXDuKVtWVAUh_MAPeFiEHXVdg
I only need to remove the part of the file name after mp3. Please give me a script; there are more than 600 files, and doing it manually is impossible.
You can use the rename command:
rename "s/mp3\?.*/mp3/" *.mp3*
#!/bin/bash
shopt -s nullglob    # expand to nothing (instead of the literal pattern) if no file matches
for F in *.mp3\?*; do
    # strip everything after ".mp3" from the name; echo makes this a dry run
    echo mv -v -- "$F" "${F%%.mp3\?*}.mp3"
done
Save it to a script like script.sh, then run it as bash /path/to/script.sh in the directory where the files exist.
Remove the echo once you have checked that the output is correct.
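If your rename is the Perl version, you can likewise preview what the first command would do before running it for real:
rename -n "s/mp3\?.*/mp3/" *.mp3*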

Upload a variable named file via CURL command line

I am trying to upload a file to a server using the curl command line tool. The file being uploaded is generated by a build tool, so its name varies with the version of the project.
For example:
The file to be uploaded is named: puppet-14.1.6-snapshot.zip // where 14.1.6 is the version of the project
Here is my (working) curl command:
curl -u admin:admin -F file=@"target/puppet-14.1.6-snapshot.zip" -F name="puppet-core-pkg" -F force=true -F install=true http://myserver.com:4502/service.jsp
The above call works perfectly, but I am trying to find an alternative so that I do not need to change the file parameter every time a new version of the project goes out.
I have already tried these two:
file=@"target/puppet-*-snapshot.zip" // Does not work
file=@"target/puppet-[*]-snapshot.zip" // Does not work
Is it possible to use some regex and upload the file which matched the given regex ?
I have found a temporary workaround, but it is still not a convincing solution. Here it goes:
package_name=$(ls target/ | grep puppet-)
curl -u admin:admin -F file=@"target/$package_name" -F name="puppet-core-pkg" -F force=true -F install=true http://myserver.com:4502/service.jsp
I know there is only one file that matches puppet-, so package_name contains the name of the file to be uploaded.
Although this works for me, I am leaving this question open for an elegant solution.
Is this the only file in the directory? If so, you can just have a variable pick up the name of the file in the directory, and then do:
curl -u admin:admin -F file=@"$TheFile" # and so on
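A variant of that idea (a sketch, assuming exactly one matching archive exists in target/) lets the shell expand the glob into an array, with no ls parsing at all:
files=(target/puppet-*-snapshot.zip)    # bash array; the shell expands the glob
curl -u admin:admin -F file=@"${files[0]}" -F name="puppet-core-pkg" -F force=true -F install=true http://myserver.com:4502/service.jsp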

Copy and Rename Multiple Files with Regular Expressions in bash

I've got a file structure that looks like:
A/
    2098765.1ext
    2098765.2ext
    2098765.3ext
    2098765.4ext
    12345.1ext
    12345.2ext
    12345.3ext
    12345.4ext
B/
    2056789.1ext
    2056789.2ext
    2056789.3ext
    2056789.4ext
    54321.1ext
    54321.2ext
    54321.3ext
    54321.4ext
I need to rename all the files that begin with 20 to start with 10; i.e., I need to rename B/2022222.1ext to B/1022222.1ext
I've seen many of the other questions regarding renaming multiple files, but couldn't seem to make it work for my case. Just to see if I can figure out what I'm doing before I actually try the copy/renaming, I've done:
for file in "*/20?????.*"; do
echo "{$file/20/10}";
done
but all I get is
{*/20?????.*/20/10}
Can someone show me how to do this?
You just have a little bit of incorrect syntax is all:
for file in */20?????.*; do mv $file ${file/20/10}; done
Remove the quotes from the argument to in; otherwise, filename expansion does not occur.
The $ in the substitution should go before the brace.
Here is a solution which use the find command:
find . -name '20*' | while read oldname; do echo mv "$oldname" "${oldname/20/10}"; done
This command does not actually do your bidding, it only prints out what should be done. Review the output and if you are happy, remove the echo command and run it for real.
Just want to add to Explosion Pill's answer.
On OS X, though, you must say
mv "${file}" "${file_expression}"
or the mv command does not recognize it.
Brace expansions like:
{*/20?????.*/20/10}
can't be surrounded by quotes.
Instead, try doing (with Perl rename):
rename 's,/20,/10,' */20?????.*
You can do this using the Perl tool rename from the shell prompt. (There are other tools with the same name which may or may not be able to do this, so be careful.)
If you want to do a dry run to make sure you don't clobber any files, add the -n switch to the command.
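For example, this prints each rename that would be performed without touching any files:
rename -n 's,/20,/10,' */20?????.*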
Note
If you run the following command (on Linux)
$ file $(readlink -f $(type -p rename))
and you have a result like
.../rename: Perl script, ASCII text executable
then this seems to be the right tool =)
This seems to be the default rename command on Ubuntu.
To make it the default on Debian and derivatives like Ubuntu:
sudo update-alternatives --set rename /path/to/rename
The glob behavior of * is suppressed in double quotes. Try:
for file in */20?????.*; do
echo "${file/20/10}";
done

Is there a "watch" / "monitor" / "guard" program for Makefile dependencies?

I've recently been spoiled by using nodemon in a terminal window to rerun my Node.js program whenever I save a change.
I would like to do something similar with some C++ code I have. My actual project has lots of source files, but if we assume the following example, I would like to run make automatically whenever I save a change to sample.dat, program.c or header.h.
test: program sample.dat
	./program < sample.dat
program: program.c header.h
	gcc program.c -o program
Is there an existing solution which does this?
(Without firing up an IDE. I know lots of IDEs can do a project rebuild when you change files.)
If you are on a platform that supports inotifywait (to my knowledge, only Linux; but since you asked about Make, it seems there's a good chance you're on Linux; for OS X, see this question), you can do something like this:
inotifywait --exclude '.*\.swp|.*\.o|.*~' --event MODIFY -q -m -r . |
while read
do make
done
Breaking that down:
inotifywait
Listen for file system events.
--exclude '.*\.swp|.*\.o|.*~'
Exclude files that end in .swp, .o or ~ (you'll probably want to add to this list).
--event MODIFY
Only listen for events that modify a file; when one occurs, the path of the affected file is printed.
-q
Do not print startup messages (so make is not prematurely invoked).
-m
Listen continuously.
-r .
Listen recursively on the current directory.
The output is then piped into a simple loop which invokes make for every line read.
Tailor it to your needs. You may find inotifywait --help and the manpage helpful.
Here is a more detailed script. I haven't tested it much, so use with discernment. It is meant to keep the build from happening again and again needlessly, such as when switching branches in Git.
#!/bin/sh
datestampFormat="%Y%m%d%H%M%S"
lastrun=$(date +$datestampFormat)
inotifywait --exclude '.*\.swp|.*\.o|.*~' \
        --event MODIFY \
        --timefmt $datestampFormat \
        --format %T \
        -q -m -r . |
while read modified; do
    # only rebuild if this event is newer than the last completed build
    if [ "$modified" -gt "$lastrun" ]; then
        make
        lastrun=$(date +$datestampFormat)
    fi
done
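To use it, save it as e.g. watch-make.sh (a name picked just for this example) and leave it running in a terminal next to your editor:
sh watch-make.sh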

Unix: fast 'remove directory' for cleaning up daily builds

Is there a faster way to remove a directory than simply submitting
rm -r -f *directory*
? I am asking this because our daily cross-platform builds are really huge (e.g. 4 GB per build), so the hard disks on some of the machines frequently run out of space.
This is especially the case on our AIX and Solaris platforms.
Maybe there are 'special' commands for directory remove on these platforms?
PASTE-EDIT (moved my own separate answer into the question):
I am generally wondering why 'rm -r -f' is so slow. Doesn't 'rm' just need to modify the '..' or '.' entries to de-allocate the filesystem entries?
something like
mv *directory* /dev/null
would be nice.
For deleting a directory from a filesystem, rm is your fastest option.
On Linux, we sometimes do our builds (a few GB) in a ramdisk, and it has a really impressive delete speed :) You could also try different filesystems, but on AIX/Solaris you may not have many options...
If your goal is to have the directory $dir empty now, you can rename it, and delete it later from a background/cron job:
mv "$dir" "$dir.old"
mkdir "$dir"
# later
rm -r -f "$dir.old"
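If you don't want to wait for a cron run, the slow delete can also simply be backgrounded right away; a sketch of the same idea:
mv "$dir" "$dir.old" && mkdir "$dir"
rm -r -f "$dir.old" &     # the shell returns immediately; rm grinds on in the background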
Another trick is to create a separate filesystem for $dir; when you want to delete it, you just re-create the filesystem. Something like this:
# initialization
mkfs.something /dev/device
mount /dev/device "$dir"
# when you want to delete it:
umount "$dir"
# re-init
mkfs.something /dev/device
mount /dev/device "$dir"
I forgot the source of this trick but it works:
EMPTYDIR=$(mktemp -d)
rsync -r --delete "$EMPTYDIR"/ dir_to_be_emptied/
On AIX at least, you should be using LVM, the Logical Volume Manager. All our systems bundle all the physical hard drives into a single volume group and then create one big honkin' file system out of that.
That way, you can add physical devices to your machine at will and increase the size of your file system to whatever you need.
One other solution I've seen is to allocate a trash directory on each file system and use a combination of mv and a find cron job to tackle the space problem.
Basically, have a cron job that runs every ten minutes and executes:
rm -rf /trash/*
rm -rf /filesys1/trash/*
rm -rf /filesys2/trash/*
Then, when you want your specific directory on that file system recycled, use something like:
mv /filesys1/overnight /filesys1/trash/overnight
and, within the next ten minutes your disk space will start being recovered. The filesys1/overnight directory will immediately be available for use even before the trashed version has started being deleted.
It's important that the trash directory be on the same filesystem as the directory you want to get rid of, otherwise you have a massive copy/delete operation on your hands rather than a relatively quick move.
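A sketch of the matching crontab entry, assuming the rm commands above are collected in a hypothetical /usr/local/bin/empty-trash script:
*/10 * * * * /usr/local/bin/empty-trash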
rm -r directory works by recursing depth-first down through directory, deleting files, and deleting the directories on the way back up. It has to, since you cannot delete a directory that is not empty.
Long, boring details: each file system object is represented by an inode, and the file system keeps a flat, file-system-wide array of inodes.[1] If you just deleted directory without first deleting its children, the children would remain allocated, but without any pointers to them. (fsck checks for that kind of thing when it runs, since it represents file system damage.)
[1] That may not be strictly true for every file system out there, and there may be a file system that works the way you describe. It would possibly require something like a garbage collector. However, all the common ones I know of act like fs objects are owned by inodes, and directories are lists of name/inode number pairs.
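A quick illustration of that constraint (the error text shown is GNU coreutils'; other systems word it differently):
mkdir -p d/sub
rmdir d     # fails: "rmdir: failed to remove 'd': Directory not empty"
rm -r d     # succeeds: recurses, deletes d/sub first, then d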
If rm -rf is slow, perhaps you are using a "sync" option or similar, which is writing to the disk too often. On Linux ext3 with normal options, rm -rf is very quick.
One option for fast removal which would work on Linux and presumably also on various Unixen is to use a loop device, something like:
hole temp.img $[5*1024*1024*1024] # create a 5Gb "hole" file
mkfs.ext3 temp.img
mkdir -p mnt-temp
sudo mount temp.img mnt-temp -o loop
The "hole" program is one I wrote myself to create a large empty file using a "hole" rather than allocated blocks on the disk, which is much faster and doesn't use any disk space until you really need it. http://sam.nipl.net/coding/c-examples/hole.c
I just noticed that GNU coreutils contains a similar program "truncate", so if you have that you can use this to create the image:
truncate --size=$[5*1024*1024*1024] temp.img
Now you can use the mounted image under mnt-temp for temporary storage, for your build. When you are done with it, do this to remove it:
sudo umount mnt-temp
rm temp.img
rmdir mnt-temp
I think you will find that removing a single large file is much quicker than removing lots of little files!
If you don't care to compile my "hole.c" program, you can use dd, but this is much slower:
dd if=/dev/zero of=temp.img bs=1024 count=$[5*1024*1024] # create a 5Gb allocated file
I think that actually there is nothing faster than "rm -rf", as you quoted, for deleting your directories.
To avoid doing it manually over and over, you can set up a daily cron job running a script that recursively deletes all the build directories under your build root directory if they're "old enough", with something like:
find <buildRootDir>/* -prune -mtime +4 -exec rm -rf {} \;
(here -mtime +4 means "any file older than 4 days")
Another way would be to configure your builder (if it allows such things) to overwrite the previous build with the current one.
I was looking into this as well.
I had a dir with 600,000+ files.
rm * would fail, because the argument list was too long.
find . -exec rm {} \; was nice, deleting ~750 files every 5 seconds (I was checking the rm rate from another shell).
So instead I wrote a short script to rm many files at once, which managed about ~1000 files every 5 seconds. The idea is to put as many files into one rm command as you can, to increase the efficiency.
#!/usr/bin/ksh
string=""
count=0
for i in $(cat filelist); do
    string="$string $i"
    count=$(($count + 1))
    if [[ $count -eq 40 ]]; then
        rm $string        # delete the current batch of 40 files
        string=""
        count=0
    fi
done
# delete any leftover batch of fewer than 40 files
[[ -n $string ]] && rm $string
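Note that xargs does this batching for you automatically; a minimal equivalent, assuming filelist holds one plain path per line with no whitespace:
xargs rm < filelist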
On Solaris, this is the fastest way I have found.
find /dir/to/clean -type f|xargs rm
If you have files with odd paths (spaces or other special characters), the pipe into xargs will mangle them; let find invoke rm directly instead:
find /dir/to/clean -type f -exec rm {} +
Use
perl -e 'for(<*>){((stat)[9]<(unlink))}'
Please refer to the link below:
http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
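For reference, the one-liner is the same as this commented version:
perl -e '
    for (<*>) {                 # glob every name in the current directory
        (stat)[9] < (unlink);   # the stat/compare is incidental; unlink deletes the file
    }
'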
I needed to delete 700 GB from dozens of directories on a 1 TB AWS EBS disk (ext3) before copying the remainder to a new 200 GB XFS volume. Deleting with rm was taking hours and leaving that volume at 100% wa (I/O wait). Since disk I/O and server time are not free, mounting an empty volume over each directory hides its contents instantly, and this took only a fraction of a second per directory:
directory_to_delete=/ebs/var/tmp/
mount /dev/sdb $directory_to_delete    # /dev/sdb is an empty volume of any size
nohup rsync -avh /ebs/ /ebs2/          # then copy what remains in the background
I coded a small Java application, RdPro (Recursive Directory Purge tool), which is faster than rm. It can also remove user-specified target directories under a root, and it works on both Linux/Unix and Windows. There is both a command line version and a GUI version.
https://github.com/mhisoft/rdpro
I had to delete more than 300,000 files on Windows, with Cygwin installed. Luckily I had all the primary directories in a database, so I created a for loop and deleted each one based on its line entry, using rm -rf.
I just use find ./ -delete in the folder I want to empty; it deleted 620,000 directories (100 GB in total) in around 10 minutes.
Source: a comment on https://www.slashroot.in/comment/1286#comment-1286