So, I think I messed up badly. I was having trouble running a Python file locally, and during troubleshooting, I read a post that said to run the following:
sudo chown $(my_username) ~/.aws/credentials
sudo chown $(my_username) ~/.aws/config
I ran that (which presumably changed the owner of those files from root to my_username), and now, when I run a Python script that uses a PySpark session, I'm not able to read in any S3 parquet files!
Is there a way I can revert the ownership of those files back to root? Or is there a way I can just create new ones that root owns and delete the old ones? Am I even thinking about this correctly?
Please help!
You need to use braces in bash when doing parameter substitution.
sudo chown ${my_username} ~/.aws/credentials
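The same fix applies to the config file. And if you do want to hand the files back to root as you asked (assuming they really were root-owned before; aws configure normally creates them owned by the user who ran it), a minimal sketch would be:
sudo chown ${my_username} ~/.aws/config
# or, to revert both files to root ownership:
sudo chown root ~/.aws/credentials ~/.aws/config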
I have created my project on CentOS 7 using root. Each time I save the project after making changes, it asks for a password. How do I change the permissions of the whole project now?
To make everything writable by anyone, run this in the directory with your Django project:
chmod -R 0777 ./
Are you sure you want to keep it owned by root? I'd suggest changing the owner to whichever user the services that use the project run as.
chown -R <some-user> /path/to/project # user may be www-data
chmod -R 755 /path/to/project
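If you're not sure which user that is (on CentOS 7, Apache typically runs as apache rather than www-data), a quick check along these lines should tell you:
# print the user of any running httpd/nginx worker processes
ps -C httpd -o user= 2>/dev/null | sort -u
ps -C nginx -o user= 2>/dev/null | sort -u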
I am trying to upload images in Django. I have set the media directory in settings.py:
MEDIA_ROOT = os.path.join(BASE_DIR, '/assets/image/')
MEDIA_URL='http://127.0.0.1:8000/assets/image/'
Here is my model's image field:
doImage=models.ImageField(upload_to='doImage/%Y/%m/%d',verbose_name='Do Image')
Now when I try to upload an image, I get a permission denied (errno 13) error.
I tried chmod with 777 to give permissions to the folder:
sudo chmod -R 777 assets
I also tried changing the owner of the files using:
sudo chown -R hassan:hassan assets
But neither worked for me. If anyone has an idea of what's going wrong, please let me know.
Django stores files locally using MEDIA_ROOT and MEDIA_URL. Please refer to this doc for more details.
For example, you can also check this.
Don't do:
sudo chown -R root:root assets
That way only the root user has rights over assets.
Do:
sudo chown -R your_user:your_user /path/to/your/assets
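A quick way to confirm the fix worked (a sketch; the path is illustrative, and it assumes the dev server at 127.0.0.1:8000 runs as your own user):
# check who owns the media directory and what its mode is
ls -ld /path/to/your/assets
# try creating a file as the user the server runs as
touch /path/to/your/assets/.write_test && rm /path/to/your/assets/.write_test && echo writable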
Python 2.7.5 is installed by default on my machine, and I installed boto3 and awscli using the command pip install awscli boto3 -U --ignore-installed six. The install completed fine, but I can't find a .aws directory in my home directory. I tried to find it using the locate and find commands, but no luck. I want to know where that directory is so I can add a new profile to the credentials file in the .aws directory.
You have to run aws configure to have it create the ~/.aws directory.
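Since you mention wanting to add a new profile, a minimal follow-up (the profile name myprofile is just an example):
aws configure --profile myprofile   # prompts for keys and region, then writes a [myprofile] section to ~/.aws/credentials
cat ~/.aws/credentials              # confirm the directory and file now exist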
.aws is a hidden directory, so you need to run ls -a. This command will list hidden folders as well.
Also, I found that the .aws folder gets created under the administrator account (user), so take that into account while working: if you don't have admin rights, log in through that account and then check a path like C:\Users\Administrator\.aws.
I am using the s3cmd package on Ubuntu to upload files to AWS.
Now I want to add a lifecycle rule for different objects inside the bucket.
I can see the commands at http://s3tools.org/usage.
According to this s3cmd pull request (https://github.com/s3tools/s3cmd/pull/295), I am using it like this:
s3cmd put --recursive ${TMP_PATH}${FILENAME}${DATESTAMP}.tar.gz s3://${S3BUCKET}/${S3PATH}day/
s3cmd expire s3://${S3BUCKET} --expiry-days=365 --expiry-prefix=log/
but I keep getting this error:
Usage: s3cmd [options] COMMAND [parameters]
s3cmd: error: no such option: --expiry-days
I am unable to find a working example of how to add an expiry date/lifecycle rule to an object in a bucket.
Let me know what I am doing wrong.
Thank you
I realise this is an old question, but it appears on Google, so it's worth answering.
Ubuntu (14.04) ships with an old version of s3cmd. You can check this by running:
s3cmd --version
The best thing to do is to install it with pip, the Python package manager.
sudo apt-get remove s3cmd && sudo apt-get install python-pip && pip install s3cmd
You may need to log out and back in so your PATH picks up the new s3cmd. After that, if you run:
s3cmd --version
You should get a much later version. Your expire flag should now work fine.
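Once you're on the newer version, the expire command from the question should be accepted as-is, for example:
s3cmd expire s3://${S3BUCKET} --expiry-days=365 --expiry-prefix=log/
# newer releases also ship a getlifecycle command that should let you inspect the rule that was applied (check s3cmd --help to confirm your version has it)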
When I SSH into my AWS EB instance to run php artisan migrate, I get the following error message:
[screenshot of the error message: a permission denied error on a log file]
I am completely confused. First, I don't get this error on the local server. Second, what does a simple log file have to do with migrations anyway? They are ignored by git by default, so no log files are uploaded.
Sigh... Any ideas on how I can be allowed to run my php artisan migrate?
It's always the storage folder. Blank pages or permission denied, it's the darn storage folder.
I don't know how EB works, whether it's a regular distro or not, but you should change ownership of the storage folder to the web server user (www-data, most likely) so it can build the views, then set 775 permissions so you can read/write logs.
So something like:
sudo chown -R www-data:www-data storage/
sudo chmod -R 775 storage/
I've run into the same error.
As stated here,
"AWS AMI uses webapp as the web user, not apache or ec2-user as the file shows. In that case, the webapp user has no access rights over those files."
So, going through the steps mentioned there fixed the problem:
sudo chown $USER:webapp ./storage -R
find ./storage -type d -exec chmod 775 {} \;
find ./storage -type f -exec chmod 664 {} \;
Depending on what you're aiming to do afterwards, you might need to go through this too.