ls: cannot open directory '.': Permission denied - amazon-web-services

I have my application running on an ec2 instance.
I can successfully SSH into my instance, but when I cd into the correct folder and run ls I get the following error:
ls: cannot open directory '.': Permission denied
It seems like it has something to do with my user permissions because running the application also throws a 403 Forbidden error.
The permissions for my application folder are as follows:
d-wx-wx--x 17 ubuntu ubuntu 4096 Apr 20 10:53 application-name
Do I need to change this to something else to make it work? And how?

This error makes sense if you don't have enough privileges to read that directory. Try changing the permissions for the current user, or, as a blunt last resort, open the access mode to everyone, i.e. 777 (note that this lets any local user read and write the directory).
For example:
sudo bash
chmod 775 .

This is basically caused when the current user doesn't have enough permission to read/write/execute the contents of that directory.
Here's how you can fix it:
To grant the user permission to just the current directory, you could do this:
sudo chmod 775 directory_name
OR
sudo chmod a+rwx,o-w directory_name
To grant the user permission to the current directory, its subdirectories and files, you could do this:
sudo chmod -R 775 directory_name
OR
sudo chmod -R a+rwx,o-w directory_name
Note:
chmod means change mode or in a more literal sense change access permissions.
-R means change files and directories recursively.
a means all users
r means read permission
w means write permission
x means execute permission
o means others
+ means add
- means remove.
So the second recursive command means: recursively add read, write and execute permissions for everyone, then remove write permission from others.
That's all.
I hope this helps
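As a quick sanity check, the octal and symbolic forms above really do produce the same mode. Here is a small sketch you can run on a scratch directory (it assumes GNU coreutils' stat -c; on BSD/macOS use stat -f %Lp instead):

```shell
# Create a scratch directory and show that 775 and a+rwx,o-w are equivalent.
dir=$(mktemp -d)
chmod 775 "$dir"
stat -c %a "$dir"        # prints: 775
chmod 000 "$dir"         # wipe the mode, then rebuild it symbolically
chmod a+rwx,o-w "$dir"
stat -c %a "$dir"        # prints: 775 again (rwxrwxr-x)
rmdir "$dir"
```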

You don't have read permission on your folder.
Run chmod 775 application-name to allow read in your folder.
You'll find additional info about chmod at this link: https://kb.iu.edu/d/abdb

Related

Unable to SSH the Amazon EC2 instance using the .pem File. Error: " WARNING: UNPROTECTED PRIVATE KEY FILE! " [duplicate]

I'm working to set up Panda on an Amazon EC2 instance.
I set up my account and tools last night and had no problem using SSH to interact with my own personal instance, but right now I'm not being allowed permission into Panda's EC2 instance.
Getting Started with Panda
I'm getting the following error:
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
Permissions 0644 for '~/.ec2/id_rsa-gsg-keypair' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
I've chmoded my keypair to 600 in order to get into my personal instance last night, and experimented at length setting the permissions to 0 and even generating new key strings, but nothing seems to be working.
Any help at all would be a great help!
Hm, it seems as though unless permissions are set to 777 on the directory, the ec2-run-instances script is unable to find my keyfiles.
I've chmoded my keypair to 600 in order to get into my personal instance last night,
And this is the way it is supposed to be.
From the EC2 documentation we have "If you're using OpenSSH (or any reasonably paranoid SSH client) then you'll probably need to set the permissions of this file so that it's only readable by you." The Panda documentation you link to links to Amazon's documentation but really doesn't convey how important it all is.
The idea is that the key pair files are like passwords and need to be protected. So, the ssh client you are using requires that those files be secured and that only your account can read them.
Setting the directory to 700 really should be enough, but 777 is not going to hurt as long as the files are 600.
Any problems you are having are client side, so be sure to include local OS information with any follow up questions!
Make sure that the directory containing the private key files is set to 700
chmod 700 ~/.ec2
To fix this, you'll need to reset the permissions back to default:
sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub
If you are getting another error:
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/geek/.ssh/known_hosts).
This means that the permissions on that file are also set incorrectly, and can be adjusted with this:
sudo chmod 644 ~/.ssh/known_hosts
Finally, you may need to adjust the directory permissions as well:
sudo chmod 755 ~/.ssh
This should get you back up and running.
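The resets above can be wrapped into one small helper. This is just a sketch assuming the standard OpenSSH file names (id_rsa, id_rsa.pub, known_hosts); adjust the names if your key is called something else:

```shell
# Apply the default OpenSSH permissions described above to a .ssh directory.
# Try it on a scratch copy first, then run: fix_ssh_perms "$HOME/.ssh"
fix_ssh_perms() {
    d="$1"
    chmod 700 "$d"    # directory: only the owner may enter or list it
    if [ -f "$d/id_rsa" ]; then chmod 600 "$d/id_rsa"; fi            # private key
    if [ -f "$d/id_rsa.pub" ]; then chmod 644 "$d/id_rsa.pub"; fi    # public key
    if [ -f "$d/known_hosts" ]; then chmod 644 "$d/known_hosts"; fi
}
```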
I also got the same issue, but I fixed it by changing my key file's permission to 600.
sudo chmod 600 /path/to/my/key.pem
The private key file should be protected. In my case I have been using public-key authentication for a long time, and I set the permissions to 600 (rw-------) for the private key, 644 (rw-r--r--) for the public key, and 700 (rwx------) for the .ssh folder in the home folder. To set this up, go to the user's home folder and run the following commands.
Set the 700 permission for .ssh folder
chmod 700 .ssh
Set the 600 permission for private key file
chmod 600 .ssh/id_rsa
Set 644 permission for public key file
chmod 644 .ssh/id_rsa.pub
Change the file permission using the chmod command:
sudo chmod 700 keyfile.pem
On Windows, try using Git Bash and use your Linux commands there. Easy approach:
chmod 400 *****.pem
ssh -i "******.pem" ubuntu@ec2-11-111-111-111.us-east-2.compute.amazonaws.com
Keep your private key, public key and known_hosts in the same directory and try logging in as below (note the flag is a lowercase i):
ssh -i "hi.pem" ec2-user@ec2-**-***-**-***.us-west-2.compute.amazonaws.com
By "same directory" I mean, for example:
cd /Users/prince/Desktop
Now type the ls command and you should see
**.pem **.ppk known_hosts
Note: you have to try to log in from that same directory, or you'll get a permission denied error, as ssh can't find the .pem file from your present directory.
If you want to be able to SSH from any directory, you can add the following to your ~/.ssh/config file...
Host your.server
HostName ec2-**-***-**-***.us-west-2.compute.amazonaws.com
User ec2-user
IdentityFile ~/.ec2/id_rsa-gsg-keypair
IdentitiesOnly yes
Now you can SSH to your server regardless of where the directory is by simply typing ssh your.server (or whatever name you place after "Host").
To brief the issue: the pem file's permissions are open to every user on the machine, i.e. anyone can read and write that file.
On Windows it is difficult to run chmod directly; the way I found was to use Git Bash.
I followed the steps below.
Remove all permissions:
chmod ugo-rwx abc.pem
Add read/write permission only for the owner:
chmod u+rw abc.pem
Then run chmod 400:
chmod 400 abc.pem
Now try ssh -i with your instance.
If you are on a Windows machine, just copy the .pem file into any folder on the C drive and re-run the command:
ssh -i /path/to/keyfile.pem user@some-host
In my case, I put the file in Downloads and it actually worked.
Or follow this: https://99robots.com/how-to-fix-permission-error-ssh-amazon-ec2-instance/
Something else to consider: if you are trying to log in with a username that doesn't exist, this is the message you will get. So I assume you may be trying to ssh with ec2-user, but I recall that recently most CentOS AMIs, for example, use the centos user instead of ec2-user. So if you are running:
ssh -i file.pem centos@public_IP
make sure you are using the right user name; otherwise this may be a strong reason you see such an error message, even with the right permissions on your ~/.ssh/id_rsa or file.pem.
The solution is to make it readable only by the owner of the file, i.e. the last two digits of the octal mode representation should be zero (e.g. mode 0400).
OpenSSH checks this in authfile.c, in a function named sshkey_perm_ok:
/*
 * if a key owned by the user is accessed, then we check the
 * permissions of the file. if the key owned by a different user,
 * then we don't care.
 */
if ((st.st_uid == getuid()) && (st.st_mode & 077) != 0) {
        error("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@");
        error("@ WARNING: UNPROTECTED PRIVATE KEY FILE! @");
        error("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@");
        error("Permissions 0%3.3o for '%s' are too open.",
            (u_int)st.st_mode & 0777, filename);
        error("It is required that your private key files are NOT accessible by others.");
        error("This private key will be ignored.");
        return SSH_ERR_KEY_BAD_PERMISSIONS;
}
See the first line after the comment: it does a bitwise AND against the mode of the file, selecting all bits in the last two octal digits (077: each octal digit 7 is 0b111, where the bits stand for r/w/x respectively).
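The same check is easy to reproduce in shell arithmetic (a sketch; the mode_ok name is made up here), which shows concretely why 600 and 400 pass while 644 is rejected:

```shell
# Mirrors OpenSSH's (st_mode & 077) != 0 test: a mode is acceptable only
# if no group/other permission bits are set.
mode_ok() {
    # $1 is an octal mode like 600; prefixing 0 makes the shell parse it as octal
    [ $(( 0$1 & 0077 )) -eq 0 ]
}
mode_ok 600 && echo "0600 accepted"
mode_ok 400 && echo "0400 accepted"
mode_ok 644 || echo "0644 rejected: group/other can read"
```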
sudo chmod 700 ~/.ssh
sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub
The above 3 commands should solve the problem!
Just a note for anyone who stumbles upon this:
If you are trying to SSH with a key that has been shared with you, for example:
ssh -i /path/to/keyfile.pem user@some-host
where keyfile.pem is the private key shared with you, make sure you save it into ~/.ssh/ and chmod 600 it (not anything more permissive; a group- or world-accessible private key triggers this exact warning).
Trying to use the file when it was saved elsewhere on my machine was giving the OP's error. Not sure if it is directly related.

Getting permissions problems when trying to SSH bitnami wordpress install

Was trying to SSH to change permissions of a wordpress folder but wasn't able to...
bitnami#THE_IP_HERE: Permission denied (publickey).
Trying to see if it's the permissions on the key which I may need to change so I tried this command...
$ sudo chmod 600 /path/to/my/key.pem
and
$ sudo chmod 755 ~/.ssh
but the password found in EC2 System log doesn't seem to be correct. Am I missing something? Looking at the wrong place for the password? I am using the password given in this documentation: [https://docs.bitnami.com/aws/faq/get-started/find-credentials/] or maybe I am using an old key?

Transfer files to google compute engine instance in jupyter directory

I would like to transfer files from my computer (MacOS) to an instance using gcloud compute scp. I am trying to move the files to the /home/jupyter folder so I can work with them in JupyterLab. But somehow the full command gcloud compute scp ./myPath/myFile instance-name:/home/jupyter gives the error Permission denied.
Also I noticed that when navigating to this folder, ~ appears in the prompt. I think that means it is the actual home directory. So I tried gcloud compute scp ./myPath/myFile instance-name:~/ which works. But now the files were transferred to /home/username, which seems to be the real home directory.
Is there a way to navigate back?
The problem is that you do not have permission to write to the /home/jupyter directory.
Step 1: Add your username to the same group as /home/jupyter. I will assume that the group name is jupyter. You can display the group name with ls -ld /home/jupyter.
sudo usermod -a -G jupyter your_user_name
Step 2: Make sure that the group has write permission:
sudo chmod g+w /home/jupyter
Note the above command only sets group write permission on /home/jupyter itself. If you want to add write permission to all subdirectories and files of /home/jupyter, execute:
sudo chmod -R g+w /home/jupyter
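A quick way to sanity-check step 2 is to watch the group write bit appear on a scratch directory first (checking group membership after step 1 needs a re-login):

```shell
# Demonstrate what `chmod g+w` changes, using a throwaway directory.
d=$(mktemp -d)          # mktemp creates it with mode 700
chmod g+w "$d"
stat -c %a "$d"         # prints: 720 -- the group write bit is now set
# After step 1 (and logging in again), confirm the group took effect with:
#   id -nG your_user_name | grep -w jupyter
rm -r "$d"
```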

Laravel 5.4 AWS server

I have created a project in Laravel 5.4 and made it live using an AWS server. The issue I face is that I have to give 777 permissions to the storage folder very frequently, and without them the site does not work properly. Can anyone help me with what the issue might be? I have already given 777 permissions to the storage folder, but somehow the permissions change and the site stops because it cannot write to the log file. Thanks in advance.
Giving 777 permissions means you have opened access to ANYONE on the machine, who can then read, write and execute everything in your storage.
Instead, you need to grant your web server user permission to access the directories and files, which you can do in the following way (www-data, or whatever your web server user is called):
sudo chown -R www-data:www-data /path/to/your/laravel/root/directory
Now in order to grant the storage level permissions to your webserver you need to execute the below commands
sudo chgrp -R www-data storage bootstrap/cache
sudo chmod -R ug+rwx storage bootstrap/cache

Django server error 403 forbidden nginx/1.10.3 (ubuntu)

I have some media content on an Ubuntu server. I can upload files, but when I try to load them it shows 403 Forbidden nginx/1.10.3 (Ubuntu). The file permissions display as rw-------.
How can I retrieve all content without error?
I'm not familiar with Ubuntu.
I used this snippet to recover the files; however, it only works for a single time. After a while, it shows the same error again:
sudo chmod -R 664 /home/django/media/image/
sudo chmod -R a+X /home/django/media/image/
The nginx user must be able to read those files. You can use group permissions to allow that. Also the wsgi user must have its umask set so that files it creates are readable for the group as well.
In your case it looks like your wsgi user has umask 077, which makes files it creates only readable by the owner (rw--------). Thus nginx does not have read permission. Instead use umask 027, which will permit group users to access those files, but not write to them (there's no reason for nginx to have write access).
For example if you are using gunicorn as your wsgi server, you can use gunicorn flags --group www --umask 027. Make sure both gunicorn and nginx user belongs to the www group.
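The umask arithmetic described above is easy to verify: new files are created with mode 666 minus the masked bits, so umask 077 yields the rw------- files from the question, while umask 027 yields group-readable 640 files:

```shell
# Show how the creating process's umask decides whether nginx can read a file.
d=$(mktemp -d)
( umask 077; touch "$d/strict" )          # 666 & ~077 = 600 (owner only)
( umask 027; touch "$d/group_readable" )  # 666 & ~027 = 640 (group can read)
stat -c '%a %n' "$d/strict"               # prints: 600 .../strict
stat -c '%a %n' "$d/group_readable"       # prints: 640 .../group_readable
rm -r "$d"
```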
Fix the permissions something like this:
# set group to `www` for all files recursively
sudo chgrp www -R /home/django/media/
# set all files to be read/write by owner and readable by group `www`
find /home/django/media/ -type f -exec chmod 640 {} \;
# same with directories, +execute
find /home/django/media/ -type d -exec chmod 750 {} \;
Alternatively, use 644 for files, 755 for directories, and 022 for the umask. Then group permissions don't matter, since all users get read access.
The latter option is not security best practice, but it's probably fine, as long as you only give the django user write access.
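The find-based fix can be rehearsed on a scratch tree before pointing it at /home/django/media/. Note that the ; terminating -exec must be escaped as \; so the shell passes it through to find:

```shell
# Rehearse the file/directory permission split on a throwaway tree.
root=$(mktemp -d)
mkdir -p "$root/sub"
touch "$root/sub/img.jpg"
find "$root" -type f -exec chmod 640 {} \;   # files:       rw-r-----
find "$root" -type d -exec chmod 750 {} \;   # directories: rwxr-x---
stat -c '%a %n' "$root/sub" "$root/sub/img.jpg"
rm -r "$root"
```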