Hi folks, I seem to have a problem copying a file to my AWS instance over SFTP with MobaXterm.
I am receiving a "permission denied" error. If I try to change the ownership or permissions of my try/try2 folders, I get a "no such file or directory" error.
I know there is a similar question here (Amazon AWS Filezilla transfer permission denied), but it appears to add nothing beyond what I have already done.
Here's what I have done (I am root, of course):
[root@ec2-user]# ls
[root@ec2-user]# mkdir try
[root@ec2-user]# ls
try
[root@ec2-user]# chown -R ec2-user /try
chown: cannot access `/try': No such file or directory
[root@ec2-user]# chown -R ec2-user /try/
chown: cannot access `/try/': No such file or directory
[root@ec2-user]# ls
try
[root@ec2-user]# chown -R ec2-user/try
chown: missing operand after `ec2-user/try'
Try `chown --help' for more information.
[root@ec2-user]# cd try
[root@try]# mkdir try2
[root@try]# ls
try2
[root@try]# cd ..
[root@ec2-user]# chown -R ec2-user try/try2
chown: cannot access `try/try2': No such file or directory
[root@ec2-user]# chown -R ec2-user /try/try2
chown: cannot access `/try/try2': No such file or directory
[root@ec2-user]# cmod -R 755 /try/try2
-bash: cmod: command not found
[root@ec2-user]# chmod -R 755 /try/try2
chmod: cannot access `/try/try2': No such file or directory
[root@ec2-user]#
It may look like a low-effort question, but there's more to it than it seems; I've been sitting on this for the last hour and a half and I need a solution fast, since I have to upload a whole heap of things to a future public directory.
I also tried using the full path:
[root@ec2-user]# chown ec2-user /home/ec2-user/try/try2
[root@ec2-user]# chown -R ec2-user /home/ec2-user/try/
[root@ec2-user]# cmod -R 755 ec2-user /home/ec2-user/try/
-bash: cmod: command not found
[root@ec2-user]# chmod -R 755 ec2-user /home/ec2-user/try/
chmod: cannot access `ec2-user': No such file or directory
[root@ec2-user]# ls
try
[root@ec2-user]# cd ..
[root@home]# ls
ec2-user
[root@home]# cd ec2-user/
[root@ec2-user]# ls
try
[root@ec2-user]# cd ..
[root@home]# ls -la
total 12
drwxr-xr-x. 3 root root 4096 Mar 20 04:18 .
dr-xr-xr-x. 26 root root 4096 Mar 20 04:18 ..
drwx------. 4 ec2-user ec2-user 4096 Mar 20 05:34 ec2-user
[root@home]# chown -R root /home/ec2-user/try/
[root@home]# ls -la
total 12
drwxr-xr-x. 3 root root 4096 Mar 20 04:18 .
dr-xr-xr-x. 26 root root 4096 Mar 20 04:18 ..
drwx------. 4 ec2-user ec2-user 4096 Mar 20 05:34 ec2-user
[root@home]#
Where is my mistake? It's supposed to be a really simple thing.
I got it... It's all because I was connecting with an RSA key. AWS provides the key for logging in as a specific user into a specific folder, so the SFTP protocol from third-party software applies only to that folder. Subfolders will not be recognized by drag-and-drop software such as MobaXterm, PuTTY, or similar, unless the connection method is altered. However, once inside the AWS machine, all the required actions can be performed without any problem (subject to the permissions, of course).
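In practice that means uploading into the folder the key logs you into (for example /home/ec2-user) and then moving things onward over SSH; a minimal sketch, with illustrative destination paths:
# after the SFTP upload lands in the login folder:
sudo mv /home/ec2-user/try /var/www/html/try
sudo chown -R ec2-user /var/www/html/try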
Moderators, please do not erase this topic; it might be helpful to someone.
Related
medusa# chmod 755 /home
medusa# cd home
medusa# ls
Android-SDK Dev euryale lost+found test1 virtualbox installscript.sh vmware-tools-patches
medusa# whoami
root
medusa# ./vmware-tools-patches
zsh: permission denied: ./vmware-tools-patches
medusa#
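Note that even as root, executing a file requires at least one execute bit set on it, and zsh reports this same "permission denied" when the target is a directory rather than a script. A quick check, assuming the name from the listing above:
file ./vmware-tools-patches
chmod +x ./vmware-tools-patches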
I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
so it's copying all the files correctly. The thing is that when I enter the instance via SSH, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected; all the yum installs, Python, Docker, etc. were successful, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path, then look for them there; as written, we don't know which path the script is going to use.
Use the following command with a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, you can use the pwd command to get the current working directory and print it with echo, so you know where the script actually runs.
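For example, a minimal sketch of how the S3 part of the script could pin down its destination; the /home/ec2-user paths are an assumption for illustration:
# Record where cloud-init actually runs the script (usually /).
echo "user-data cwd: $(pwd)" >> /var/log/user-data-debug.log
# Use absolute destinations instead of relying on the current directory.
mkdir -p /home/ec2-user/content
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
chown -R ec2-user:ec2-user /home/ec2-user/content
Since user data runs as root with / as its working directory, the original files most likely ended up in / and /content rather than in /home/ec2-user.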
I cannot add a comment to this post: https://stackoverflow.com/a/21797786/6143954.
So I created a new question.
It was the correct answer before it was edited. In this answer, the line
sudo chmod -R 770 /var/www/
is replaced by
sudo chmod -R 760 /var/www/
Specifically, this solution is not suitable for Django.
An answer should not be changed after it has been marked as the accepted solution; it was correct before the original post was edited.
The GOOD solution would be:
sudo groupadd varwwwusers
sudo adduser www-data varwwwusers
sudo chgrp -R varwwwusers /var/www/
sudo chmod -R 770 /var/www/
How correct is this solution?
sudo chmod -R 770 /var/www/ is fine.
It means that the owner and group have all rights and others have none.
That is the right way.
If you set 760 recursively, group users will get "Permission denied" on read or write attempts, because a directory needs the execute bit before its contents can be accessed at all.
For files inside the directory, you can set 760.
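If you want to combine the two, a common pattern (a sketch, reusing the varwwwusers group from the question) is to set directories and files separately:
sudo chgrp -R varwwwusers /var/www/
sudo find /var/www/ -type d -exec chmod 770 {} +
sudo find /var/www/ -type f -exec chmod 760 {} +
That keeps directories traversable for the group while files stay readable and writable but not executable.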
I have my Amazon AWS instance running, and the test page is up.
I am trying to SFTP my website files to the server. I have FileZilla connected to the AWS server, but when I try to move the files from my local machine to the /var/www/html directory, it says permission denied.
I just figured out I CAN move the files to the /home/ec2-user directory, so my files are on the server, I guess. But when I try to move them from there to the /var/www/html directory, it still won't move them: permission denied.
I've been researching this for approximately 2 hours now, but I haven't been able to locate the answer.
Any help is greatly appreciated, I'm so close! Haha
Thanks
UPDATE
To allow the user ec2-user (Amazon AWS) write access to the public web directory (/var/www/html),
enter this command via PuTTY or a terminal, as a user with sudo rights:
sudo chown -R ec2-user /var/www/html
Then make sure the permissions on that entire folder are correct:
sudo chmod -R 755 /var/www/html
Docs:
Setting up amazon ec2-instances
Connect to Amazon EC2 file directory using Filezilla and SFTP (Video)
Understanding and Using File Permissions
If you are using CentOS, use
sudo chown -R centos:centos /var/www/html
sudo chmod -R 755 /var/www/html
For Ubuntu
sudo chown -R ubuntu:ubuntu /var/www/html
sudo chmod -R 755 /var/www/html
For the Amazon AMI
sudo chown -R ec2-user:ec2-user /var/www/html
sudo chmod -R 755 /var/www/html
In my case /var/www/html is not a directory but a symbolic link to /var/app/current, so you should change the real directory, i.e. /var/app/current:
sudo chown -R ec2-user /var/app/current
sudo chmod -R 755 /var/app/current
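You can confirm the symlink before changing anything; ls -ld prints the link target:
ls -ld /var/www/html
# e.g. lrwxrwxrwx ... /var/www/html -> /var/app/current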
I hope this saves some of your time :)
If you're using Ubuntu then use the following:
sudo chown -R ubuntu /var/www/html
sudo chmod -R 755 /var/www/html
This works for everyone (ugo+rwx is the symbolic form of chmod 777):
chmod ugo+rwx your-folder
https://help.ubuntu.com/community/FilePermissions
In my case, after 30 minutes of changing permissions, I realized that the XLSX file I was trying to transfer was still open in Excel.
For me, the following worked:
chown -R ftpusername /var/app/current
If I download fossil v1.23 to my Linux server, everything goes fine.
root@x:/home/james/test# unzip fossil-linux-x86-20120808112557.zip
Archive: fossil-linux-x86-20120808112557 (1).zip
inflating: fossil
root@x:/home/james/test# mv fossil fossil_1_23
root@x:/home/james/test# chmod 777 fossil_1_23
root@x:/home/james/test# ./fossil_1_23 help
Usage: ./fossil_1_23 help COMMAND
Common COMMANDs: (use "./fossil_1_23 help --all" for a complete list)
add clean gdiff mv rm timeline
addremove clone help open settings ui
all commit import pull sqlite3 undo
annotate diff info push stash update
bisect export init rebuild status version
branch extras ls remote-url sync
changes finfo merge revert tag
This is fossil version 1.23 [957b17af58] 2012-08-08 11:25:57 UTC
But 1.24 fails with 'No such file', even though an ls command shows the file is present:
root@x:/home/james/test# unzip fossil-linux-x86-20121022124804.zip
Archive: fossil-linux-x86-20121022124804.zip
inflating: fossil
root@x:/home/james/test# mv fossil fossil_1_24
root@x:/home/james/test# chmod 777 fossil_1_24
root@x:/home/james/test# ls -l
total 3620
-rw-r--r-- 1 root root 528859 Oct 24 10:04 fossil-linux-x86-20120808112557.zip
-rw-r--r-- 1 root root 670298 Oct 24 10:04 fossil-linux-x86-20121022124804.zip
-rwxrwxrwx 1 root root 1061584 Aug 11 10:30 fossil_1_23
-rwxrwxrwx 1 root root 1418656 Oct 22 09:16 fossil_1_24
root@x:/home/james/test# ./fossil_1_24 help
-bash: ./fossil_1_24: No such file or directory
Richard Hipp rebuilt the 1.24 linux binary. The new version works fine.
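As an aside (not from the original thread): an execute attempt that fails with "No such file or directory" while ls clearly shows the file usually means the ELF interpreter the binary requests is missing, e.g. a 32-bit loader on a 64-bit-only system. You can inspect the binary like this:
file ./fossil_1_24
readelf -l ./fossil_1_24 | grep interpreter
# if it requests /lib/ld-linux.so.2, install your distro's 32-bit runtime libraries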