I wanted to continue a project I haven't touched in a while, and came across this error when executing
php bin/console doctrine:generate:entities SalonBundle (it's run from a shell, so it uses the PHP CLI):
Generating entities for bundle "SalonBundle"
> backing up Salon.php to Salon.php~
> generating SalonBundle\Entity\Salon
[Symfony\Component\Debug\Exception\ContextErrorException]
Warning: chmod(): Operation not permitted
doctrine:generate:entities [--path PATH] [--no-backup] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-e|--env ENV] [--no-debug] [--] <command> <name>
To begin with, I'm not sure why Symfony tries a chmod.
All files are owned by root:www-data.
File permissions are rw-rw-r--.
My user is in the www-data group.
Uploading, creating files, copying, moving, etc. all work fine.
The permissions are set via a script which runs the following commands.
$targetDir is the path passed as an argument to the script.
chown -R root:www-data $targetDir
find $targetDir -type d -exec chmod ug+rwx "{}" \;
find $targetDir -type f -exec chmod ug+rw "{}" \;
find $targetDir -type d -exec chmod g+s "{}" \;
find $targetDir -type d -exec setfacl -m g:www-data:rwx,d:g:www-data:rwx "{}" \;
find $targetDir -type f -exec setfacl -m g:www-data:rw- "{}" \;
I just added -vvv to the command line as someone suggested and got this:
Exception trace:
() at /var/www/3DH/salon/vendor/doctrine/orm/lib/Doctrine/ORM/Tools/EntityGenerator.php:392
Symfony\Component\Debug\ErrorHandler->handleError() at n/a:n/a
chmod() at /var/www/3DH/salon/vendor/doctrine/orm/lib/Doctrine/ORM/Tools/EntityGenerator.php:392
Doctrine\ORM\Tools\EntityGenerator->writeEntityClass() at /var/www/3DH/salon/vendor/doctrine/orm/lib/Doctrine/ORM/Tools/EntityGenerator.php:347
Doctrine\ORM\Tools\EntityGenerator->generate() at /var/www/3DH/salon/vendor/doctrine/doctrine-bundle/Command/GenerateEntitiesDoctrineCommand.php:133
Doctrine\Bundle\DoctrineBundle\Command\GenerateEntitiesDoctrineCommand->execute() at /var/www/3DH/salon/vendor/symfony/symfony/src/Symfony/Component/Console/Command/Command.php:256
Symfony\Component\Console\Command\Command->run() at /var/www/3DH/salon/vendor/symfony/symfony/src/Symfony/Component/Console/Application.php:837
Symfony\Component\Console\Application->doRunCommand() at /var/www/3DH/salon/vendor/symfony/symfony/src/Symfony/Component/Console/Application.php:187
Symfony\Component\Console\Application->doRun() at /var/www/3DH/salon/vendor/symfony/symfony/src/Symfony/Bundle/FrameworkBundle/Console/Application.php:80
Symfony\Bundle\FrameworkBundle\Console\Application->doRun() at /var/www/3DH/salon/vendor/symfony/symfony/src/Symfony/Component/Console/Application.php:118
Symfony\Component\Console\Application->run() at /var/www/3DH/salon/bin/console:27
This topic provides no solutions.
This solution doesn't apply to me, as I don't use Vagrant (it's not installed).
Well, I figured out the problem.
I'd say it was my own fault in the first place, I guess...
@John Smith: You weren't far off...
In fact, my problem is simply one of logic:
trying to chmod a file owned by root as a regular user is impossible,
and that's normal.
Every file generated in the src/ folder is created by the files (let's call them tools) in /vendor/doctrine/orm/lib/Doctrine/ORM/Tools/.
And of course the generated files are owned by the user who runs these tools (twig, entity, form, etc.).
When generating a file, these tools try to chmod the new file, in case we were careless enough not to set our folder & file permissions correctly.
In my case, since I had done a pass with my webperms script, all files were owned by root:www-data.
So of course, when I tried to generate entities, it failed because I was no longer the file owner.
There are currently two solutions to this problem:
Change the ownership of the files in src/ to user:www-data (see the example just after this list)
Comment out the chmod() call in the tools located in /vendor/doctrine/orm/lib/Doctrine/ORM/Tools/
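For the first option, a one-liner like this should be enough (a sketch: I'm assuming it is run from the project root, that sudo is available, and that $(whoami) is the user you develop with):
sudo chown -R "$(whoami)":www-data src/
After that the generator can chmod the files it just wrote, since your user owns them again, and a later pass of the webperms script can restore the root:www-data ownership if needed.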
I'm not sure how to contact the Symfony team, but I would suggest adding an if condition, driven by a Symfony parameter, that would let us bypass this chmod.
I'm trying to remove all of the folder meta files from a Unity project in the git repo my team is using. Other members don't delete the meta file associated with a folder they deleted/emptied, and that propagates to everyone else. It's a minor annoyance that shouldn't need to be seen, so I've added this to the .gitignore:
*.meta
!*.*.meta
and now need to remove only the folder metas. I'd rather remove the metas now than wait for them to appear and have git remove them later. I'm using git bash on Windows and have tried the following commands to find just the folder metas:
find . -name '*.meta' > test.txt #returns folders and files
find . -regex '.*\.meta' > test.txt #again folders and files
find . -regex '\.[^\.]{0,}\.meta' > test.txt #nothing
find . -regex '\.[^.]{0,}\.meta' > test.txt #nothing
find . -regex '\.{2}' > test.txt #nothing
find . -regex '(\..*){2}' > test.txt #nothing
I know regex is interpreted differently per program/language but the following will produce the results I want in Notepad++ and I'm not sure how to translate it for git or git bash:
^.*/[^.]{0,}\.meta$
by capturing the lines (file paths from the root of the repo) that end with /<foldername>.meta, since I realized some folders contain a '.' in their name.
Once this is figured out I need to go line by line and git rm the files.
NOTE
I can also run:
^.*/.*?\..*?\.meta$\n
and replace with nothing to delete all of the file metas from the folders-and-files result, then use that result to get all of the folder metas, but I'd also like to know how to avoid needing Notepad++ as an extra step.
To confine the results to indexed files only, use git ls-files, the Swiss Army knife of index-aware file listing. git update-index is the core-command index munger,
git ls-files -i -x '*.meta' -x '!*.*.meta' | git update-index --force-remove --stdin
which will remove the files from your index but leave them in the work tree.
It's easier to express with two conditions just like in .gitignore. Match *.meta but exclude *.*.meta:
find . -name '*.meta' ! -name '*.*.meta'
Use -exec to run the command of your choice on the matched files. {} is a placeholder for the file names and ';' signifies the end of the -exec command (weird syntax but it's useful if you append other things after the -exec ... ';').
find . -name '*.meta' ! -name '*.*.meta' -exec git rm {} ';'
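If some of the matched paths contain spaces, a null-delimited variant of the same idea should behave identically (a sketch, assuming the GNU find and xargs bundled with Git Bash):
find . -name '*.meta' ! -name '*.*.meta' -print0 | xargs -0 git rm
git rm stages the deletions, so a commit afterwards records them.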
I have a CentOS server running WHM/cPanel sites, and a lot of these sites run WordPress. I would like to do an automated backup that zips and moves any account containing an 'uploads' folder into a backup directory. I know there are other solutions, but most want you to back up the entire WordPress site; I only need to back up the uploads.
I'm not very good with .sh scripts and have spent a lot of time already trying to figure this out, but I can't seem to find any examples similar enough for me to be successful. The main problem I have is naming the zip after the account.
Location of most upload folders (user1 being the account that changes):
/home/user1/public_html/wp-content/uploads
/home/user2/public_html/wp-content/uploads
/home/user3/public_html/wp-content/uploads
Script example:
find /home/ -type d -name uploads -exec sh -c 'zip -r /backup/uploads/alluploads.zip `dirname $0`/`basename $0`' {} \;
The problem with this approach is that it all writes to one single zip file. How can I alter this to save each user's uploads to their own user1-uploads.zip file?
I've played around with exec and sed but I can't seem to figure it out. This is the best I've got (just trying to get it to echo the name), but it's not right. Sorry, I'm terrible with regex:
find /home/ -type d -name uploads -exec sh -c 'string=`dirname $0` echo $string | sed `/\/home\/\(.*\)\/new\/wp-content\/uploads/`'{} \;
Would appreciate any help or directions on how to fix this. Thanks!
You can use Bash's globbing with * to expand all the different user directories.
for uploads in /home/*/public_html/wp-content/uploads; do
    IFS=/ read -r _ _ user _ <<<"$uploads"
    zip -r /backup/uploads/${user}.zip "$uploads"
done
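If the IFS/read trick looks opaque, the same user name can be pulled out with plain parameter expansion instead (a sketch under the same /home/<user>/public_html layout assumption, using the <user>-uploads.zip naming the question asks for):
for uploads in /home/*/public_html/wp-content/uploads; do
    user=${uploads#/home/}   # strip the leading /home/
    user=${user%%/*}         # keep everything up to the next slash
    zip -r "/backup/uploads/${user}-uploads.zip" "$uploads"
done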
A couple of solutions come to mind. You could loop through the user directories:
cd /home
for udir in *; do
find /home/$udir -type d -name uploads -exec sh -c 'zip -r /backup/uploads/'"$udir"'-uploads.zip `dirname $0`/`basename $0`' {} \;
done
or use cut to get the second element of the path (the third /-separated field, since the paths printed by find start with a slash):
find /home/ -type d -name uploads -exec sh -c 'zip -r /backup/uploads/`echo "$0" | cut -d/ -f 3`-uploads.zip `dirname $0`/`basename $0`' {} \;
Although both of these would run into issues if the user has more than one directory named uploads anywhere under their home.
You might use tr to simply replace the path separator with another character so you end up with a separate zip for each uploads directory:
find /home/ -type d -name uploads -exec sh -c 'zip -r /backup/uploads/`echo $0 | tr "/" "-"`.zip `dirname $0`/`basename $0`' {} \;
so you would end up with files named /backup/uploads/home-user-wp-content-uploads.zip and /backup/uploads/home-user-wip-uploads.zip instead of one zip overwriting the other.
I am trying to chmod 777 all .doc files on my Mac. Is there a way through Terminal that I could do this?
__
Thanks for the responses. I thought this was the way to change permissions on Word doc files. I have 2 users on a Mac that share a folder, but when one creates a doc file the other only gets read permission. I want both of them (or everyone, it doesn't matter) to have read & write access. I also want to go back and retroactively make all the past docs, some of which user A has read & write permissions for and some of which user B has read & write permissions for, readable and writeable by both of them. Is there another way to do this? From what I can tell, this is a Mac permissions issue, nothing in Word. I thought chmod 777 in Terminal was the way to do this.
Use find and xargs or the -exec option of find.
/home/erjablow> find . -name "*.doc" | xargs -n 3 chmod a+rwx
# Find the names of all "doc" files beneath the current directory.
# Gather them up 3 at a time, and execute
# chmod a+rwx file1 file2 file3
# on each set.
Now, find is a nasty utility, and hard to use. It can do ridiculous things, and one misspelling can ruin one's day.
Why do you want to make all doc files unprotected?
Don't.
If you really want to do this:
find / -x -name '*.doc' -type f -print0 | xargs -0 chmod 777
I have not tested this. I don't guarantee that the Mac versions of the find and xargs commands support all these options; I'm used to the GNU findutils versions that are common on Linux systems.
The find command starts at the root directory, limiting itself (-x) to the current filesystem, selects files whose names end in .doc, and prints them separated by null characters. The latter is necessary in case you have files or directories whose names contain spaces. The -x option is equivalent to the -xdev option in some other versions of find.
If you want to match files whose names end in .Doc, .DOC, etc., use -iname '*.doc' rather than -name '*.doc'.
The xargs command takes this null-delimited list of files and executes chmod 777 on each of them, breaking the list into chunks that won't overflow chmod's command line.
Let me emphasize again that:
I have not tested the above command. I don't currently have access to a MacOS system; I've just read the relevant man pages online. If you want to do a dry run, insert echo before chmod; it will print the commands that would be executed (expect a lot of output with very long lines). Or add -n 1 to xargs as well to produce one line for each file.
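Concretely, the dry-run variants would look like this (also untested, same assumptions as above):
find / -x -name '*.doc' -type f -print0 | xargs -0 echo chmod 777
find / -x -name '*.doc' -type f -print0 | xargs -0 -n 1 echo chmod 777
The first prints a few very long chmod command lines; the second prints one chmod line per file.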
Doing this in the first place is a bad idea. 777 gives read, write, and execute permission to every user on the system. .doc files are not executables; giving them execute permission makes no sense on a Unix-like system. 644 is likely to be a more sensible value (read/write for the owner, read-only for everyone else).
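For example, the same pipeline with that more conservative mode (again untested, same caveats):
find / -x -name '*.doc' -type f -print0 | xargs -0 chmod 644
Since the actual goal in the question is shared editing between two users, 664 (group write) on files owned by a common group may be closer to what's wanted, but that's a judgment call.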
I agree with the previous response that find is nasty. And piping it to another utility can be tricky. If I had to do this, I'd specify the file type and use find's exec function instead of xargs.
find / -x -type f -iname '*.doc' -exec chmod 777 {} \;
-x will search only the local Macintosh HD and exclude any mounted volumes
You may want to consider other criteria to refine the search, like a different starting point or excluding particular folders and/or file extensions
find / -x -not \( -path "*/some_sub-folder*" -or -path "*/some_other_sub-folder*" \) -type f \( -iname '*.doc' -or -iname '*.docx' \) -exec chmod 777 {} \;
Or if you're set on using find|xargs, you should probably try -print instead of -print0 so that each file is returned as a separate line. That could fix it as well.
Either way, backup before doing this and test carefully. Good luck, HTH.
I am trying to strip all '?' characters from file names in a given directory, which has subdirectories that in turn have subdirectories of their own. I've tried using a simple Perl regex script with system calls, but it fails to recurse into each subdirectory, and doing it manually would waste too much time. How can I solve this?
You can use the find command to search for filenames containing '?' and then use its -exec argument to run a script which removes the '?' characters from the filename. Consider this script, which you could save to /usr/local/bin/rename.sh, for example (remember to give it +x permission):
#!/bin/sh
mv "$1" "$(echo "$1" | tr -d '?')"
Then this will do the job:
find -name "*\?*" -exec rename.sh {} \;
Try this:
find -name '*\?*' -exec prename 's/\?//g' {} +
See https://metacpan.org/module/RMBARKER/File-Rename-0.06/rename.PL (this is the default rename command on Ubuntu distros)
Find all the names containing '?' and strip the '?' from each of them. The -exec option could probably be used as well, but it would require an additional script:
for f in $(find "$dir" -name "*\?*" -a -type f) ; do
    mv "$f" "${f//\?/}"
done
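If any of the affected paths contain spaces, a null-delimited loop is a safer variant of the same idea (a sketch assuming GNU find and bash, with $dir as the starting directory as above):
find "$dir" -type f -name '*\?*' -print0 | while IFS= read -r -d '' f; do
    mv "$f" "${f//\?/}"
done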
I have a folder which contains ~50 text files (PHP) and hundreds of images. I would like to move all the images to a subfolder, and update the PHP files so any reference to those images points to the new subfolder.
I know I can move all the images quite easily (mv *.jpg /image, mv *.gif /image, etc.), but I don't know how to go about updating all the text files. I assume a regex has to be created to match all the images in a file, and then somehow the new directory has to be prepended to the image file name? Is this best done with a shell script? Any help is appreciated (the server is Linux/CentOS 5).
Thanks!
sed with the -i switch is probably what you're looking for. -i tells sed to edit the file in-place.
Something like this should work:
find /my/php/location -name '*.php' | xargs sed -i -e 's,/old/location/,/new/location/,g'
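If you'd rather keep a safety net while editing in place, GNU sed accepts a backup suffix attached to -i (a sketch; the .bak suffix and the paths are placeholders):
find /my/php/location -name '*.php' -print0 | xargs -0 sed -i.bak 's,/old/location/,/new/location/,g'
The -print0/-0 pair just keeps file names containing spaces intact.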
You could do it like this:
#!/bin/sh
for f in *.jpg *.png *.gif; do
    mv "$f" gfx/
    for p in *.txt; do
        sed -i.bak "s,$f,gfx/$f,g" "$p"
    done
done
It finds all jpg/png/gif files and moves them to the "gfx" subfolder, then for each txt file (or whatever kind of file you want edited) it uses "sed" in-place to alter the path.
Btw. it will create backup copies of the edited files with the extra extension ".bak". This can be avoided by omitting the ".bak" part in the script.
This will move all images to a subdir called 'images' and then change only links to image files by adding 'images/' just before the basename.
mkdir images
mv -f *.{jpg,gif,png,jpeg} images/
sed -i 's%[^/"'\'']\+\.\(gif\|jpg\|jpeg\|png\)%images/\0%g' *.php
If you have thousands of files, you may need to use find and xargs instead, so it's a bit slower:
find ./ -regex '.*\.\(gif\|jpg\|png\|jpeg\)' -exec mv {} images/ \;
find ./ -name '*.php' -print0 | \
xargs -0 sed -i 's%[^/"'\'']\+\.\(gif\|jpg\|jpeg\|png\)%images/\0%g'
Caution: it will also change the paths inside remote image URLs. Also, make sure you have a full backup of your directory; PHP syntax and variable names might cause problems.