I have set all the folders in my /var/www/upload directory to 755, but my PHP application hosted on that server is still not uploading files. How can I fix this permission problem so that PHP file uploads work correctly?
It does not show any error; the file is simply not moved to the intended folder.
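Seeing no error usually means the script never checks the result of the upload. Below is a minimal diagnostic sketch that surfaces the failure instead of continuing silently; the form field name "file" and the target directory /var/www/upload are assumptions to adapt.

<?php
// Diagnostic sketch: report upload failures instead of failing silently.
// The field name "file" and the target directory are assumptions.
$targetDir  = '/var/www/upload';
$targetPath = $targetDir . '/' . basename($_FILES['file']['name']);

if ($_FILES['file']['error'] !== UPLOAD_ERR_OK) {
    die('Upload failed with PHP error code ' . $_FILES['file']['error']);
}

// 755 only lets the owning user write, so the directory must be owned by
// (or otherwise writable to) the web server user, e.g. www-data.
if (!is_writable($targetDir)) {
    die($targetDir . ' is not writable by the web server user');
}

if (!move_uploaded_file($_FILES['file']['tmp_name'], $targetPath)) {
    die('move_uploaded_file() failed - check ownership and open_basedir');
}

echo 'Uploaded to ' . $targetPath;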
I'm trying to set up Moodle on a free dyno on Heroku.
I started my project with the stable version 3.10 of Moodle in a new repository.
I've created the Heroku app and connected my GitHub repository to it.
I've created an S3 bucket that allows read and write access to objects from anyone.
On config.php (a sketch of the result is below):
I changed the database settings to point to the free Postgres database that I also created,
I set wwwroot to my Heroku app (http://myherokuappexample.herokuapp.com/),
and I set dataroot to my S3 bucket endpoint (https://mys3bucketnamexample.s3.eu-north-1.amazonaws.com/moodledata/).
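For reference, here is a sketch of what the config.php described above would look like; the database host and credentials are placeholders, not values from the question. Note that Moodle expects $CFG->dataroot to be a local, writable directory on the server's filesystem rather than an HTTP(S) URL, which is consistent with the fatal error quoted further down.

<?php  // config.php - sketch of the setup described above; all credentials are placeholders
unset($CFG);
global $CFG;
$CFG = new stdClass();

// Free Heroku Postgres add-on (host, name, user and password are assumptions)
$CFG->dbtype    = 'pgsql';
$CFG->dblibrary = 'native';
$CFG->dbhost    = 'example-host.compute-1.amazonaws.com';
$CFG->dbname    = 'exampledb';
$CFG->dbuser    = 'exampleuser';
$CFG->dbpass    = 'examplepassword';
$CFG->prefix    = 'mdl_';

// Public URL of the Heroku app
$CFG->wwwroot = 'http://myherokuappexample.herokuapp.com';

// Moodle treats dataroot as a directory path it must be able to stat and
// write to; an S3 HTTPS endpoint is not such a directory on the dyno.
$CFG->dataroot = 'https://mys3bucketnamexample.s3.eu-north-1.amazonaws.com/moodledata/';

$CFG->directorypermissions = 0777;

require_once(__DIR__ . '/lib/setup.php');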
I created the folder moodledata, added an image to it, and I was able to access it from the browser (for example https://mys3bucketnamexample.s3.eu-north-1.amazonaws.com/moodledata/ola.png).
But when the app is deployed to Heroku, I get "Fatal error: $CFG->dataroot is not configured properly, directory does not exist or is not accessible! Exiting."
What am I doing wrong?
I have a Linux server (Ubuntu) to which I uploaded a project. In this project I generate a PDF with ReportLab, using some custom fonts for the created PDF.
I am having permission problems with ReportLab. When I check the Apache error log (/var/log/apache2/error.log), the error that appears is:
reportlab.pdfbase.ttfonts.TTFError: Can't open file "recursos/fonts/Lato-Regular.ttf"
I already gave full permissions (chmod 777) to the whole project and also changed its group to the Apache user (chown :www-data), but the same error appears.
On my local machine it works perfectly.
I've got a Django project which works great. Previously we just cloned it down and used password authentication. I changed the remote to git@bitbucket.org:myteam/our_repo.git
Recently we started requiring 2FA, so now we can only clone down over SSH.
For this project, I created an access key (read-only, which is all I need for cloning down on a staging server) and I was able to clone the repo (git clone git@bitbucket.org:myteam/our_repo.git) without issue and get it all set up. This appeared to have worked.
The other server admin remoted in and tried to run git pull origin master, and he got a permission error. His Windows user is part of the Administrators group, but for some reason that didn't matter: his local user had to be added to the directory with full access before he could run git pull origin master.
It appears that this permission issue is causing other issues, too. File uploads (from the Django admin) are no longer actually uploading the files into the directory on the server - my guess is that this is related to the permissions issue, too. Nothing was changed to impact this - the project was just cloned down over SSH.
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more? This wasn't an issue before - only since we've switched over to SSH.
Any feedback is helpful!
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more?
No, it does not change anything locally.
And 2FA only impacts HTTPS URLs (where your password must be a PAT, a Personal Access Token).
It has no bearing on SSH URLs.
First, check the output of ssh -Tv git@github.com.
I have installed Jenkins and Apache on one CentOS instance in AWS. I have connected Jenkins with GitHub, but I am not able to access the application through the URL, as it shows the following error.
You should add your website content to the directory /var/www/html/.
I need to copy files from the Jenkins directory to the one mentioned above. Can you please help me copy the app so that I can use it in the browser?
I'm working with a website running on Laravel. The site works fine locally through Homestead, no problems.
Recently, I pushed the git repo up to a server that never had this site running on it before. I set everything up right (had some nginx config issues for a while, but got those all sorted out). Nginx has the public folder set as the site root, so it hits the proper index page when you load the page.
What I'm getting is a 500 error. My error logs reveal the following is the reason:
site_root/public/../bootstrap/autoload.php - Failed to open stream: permission denied in site_root/public/index.php on line 22
I can confirm that the bootstrap folder and the autoload.php file are both accessible by the web user, and have permissions that should allow access.
I've read a few cases online of people solving this issue with a 'composer install'. I tried updating composer, doing an install, and dumping its cache. I also tried removing the vendor folder (which had been a part of the git repo), and running composer install to regenerate it. None of these have worked. Happy to supply any info that will help. This is Laravel 5.2 running on Ubuntu Server 14.04 with nginx, all on an AWS box.
Solved it. This was actually an issue with site-wide permissions. They were set to 770 instead of 775. I suspect that I can and should restrict them more. For now, I'm just happy to have it loading again.
The moral of the story is to check your permissions site-wide, not just on the file named in the fatal error. You may keep getting the same fatal error despite permissions being wide open on the mentioned file; if so, look for permission issues elsewhere (a sketch of how to check is below).
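One way to apply that advice is to walk every directory above the failing file and report whether it can be read and traversed. A rough sketch follows; the path is hypothetical, and the idea is to run it as the web server user (for example sudo -u www-data php check_perms.php) so the checks reflect that user's permissions.

<?php
// check_perms.php - rough sketch: walk every directory above a file and report
// whether the current user can read/traverse it. The path below is hypothetical.
$path = '/var/www/site_root/bootstrap/autoload.php';

$parts   = explode('/', trim($path, '/'));
$current = '';
foreach ($parts as $part) {
    $current .= '/' . $part;
    $readable    = is_readable($current) ? 'yes' : 'no';
    $traversable = is_dir($current) ? (is_executable($current) ? 'yes' : 'no') : '-';
    printf("%-55s readable: %-3s traversable: %s\n", $current, $readable, $traversable);
}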