Amazon AWS Filezilla transfer permission denied

I have my Amazon AWS instance running, and the test page is up.
I am trying to SFTP the files to the server to display my website. I have Filezilla connected to the AWS server, but when I try to move the files from my local machine to the /var/www/html directory, it says permission denied.
I just figured out I CAN move the files to the /home/ec2-user directory, so my files are on the server. But when I try to move them from there to /var/www/html, it still won't move them: permission denied.
I've been researching this for approximately two hours now, but I haven't been able to locate the answer.
Any help is greatly appreciated, I'm so close! Haha
Thanks
UPDATE

To allow the user ec2-user (Amazon AWS) write access to the public web directory (/var/www/html),
enter this command via PuTTY or Terminal, as root via sudo:
sudo chown -R ec2-user /var/www/html
Make sure the permissions on that entire folder are correct:
sudo chmod -R 755 /var/www/html
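If you'd rather leave the web server's group in charge of the web root, an alternative is the pattern from the Amazon docs (it also appears in a later answer on this page): add ec2-user to the apache group and grant group write access. A sketch, assuming Apache on Amazon Linux:
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www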
Docs:
Setting up Amazon EC2 instances
Connect to Amazon EC2 file directory using Filezilla and SFTP (video)
Understanding and Using File Permissions

If you are using CentOS, then use:
sudo chown -R centos:centos /var/www/html
sudo chmod -R 755 /var/www/html
For Ubuntu:
sudo chown -R ubuntu:ubuntu /var/www/html
sudo chmod -R 755 /var/www/html
For Amazon Linux (AMI):
sudo chown -R ec2-user:ec2-user /var/www/html
sudo chmod -R 755 /var/www/html
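In each case you can verify the resulting ownership and mode afterwards:
ls -ld /var/www/html
stat -c '%U:%G %a' /var/www/html   # prints owner:group and the octal mode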

In my case /var/www/html is not a directory but a symbolic link to /var/app/current, so you should change the real directory, i.e. /var/app/current:
sudo chown -R ec2-user /var/app/current
sudo chmod -R 755 /var/app/current
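To check whether the path is a symlink before changing anything:
readlink -f /var/www/html   # prints the real target, e.g. /var/app/current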
I hope this saves some of your time :)

If you're using Ubuntu then use the following:
sudo chown -R ubuntu /var/www/html
sudo chmod -R 755 /var/www/html

This works best for everyone:
chmod ugo+rwx your-folder
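Note that ugo+rwx grants read, write, and execute to user, group, and others alike (equivalent to chmod 777), so it is the most permissive option; to check the effect:
ls -ld your-folder   # should now show drwxrwxrwx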
https://help.ubuntu.com/community/FilePermissions

In my case, after 30 minutes of changing permissions, I realized that the XLSX file I was trying to transfer was still open in Excel.

For me, the following worked:
chown -R ftpusername /var/app/current

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
# Install Docker and docker-compose
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
# Create the content folder and start the nginx reverse proxy
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# Download the compose file, database backup, and site content from S3
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
# Bring up the stack and restore the database
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
So it's copying all the files correctly. The thing is, when I SSH into the instance, I don't see any files:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected: the yum installs, Python, Docker, etc. were all successful, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path and then looking for them there; as the script is written, we don't know which directory it runs from (user-data scripts are executed by root via cloud-init, so the working directory is not /home/ec2-user).
Use the following command with a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, you can run the pwd command inside the script and echo its output, so that you know the present working directory that was used.
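A minimal sketch of both ideas combined, reusing the bucket paths from the question (the /home/ec2-user destination is an assumption about where you expect the files to land):
#!/bin/bash
# Record where the script actually runs from (usually / when cloud-init runs it as root)
echo "user-data cwd: $(pwd)" > /var/log/user-data-cwd.log
# Copy to explicit absolute paths instead of the current directory
mkdir -p /home/ec2-user/content
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
# Make sure ec2-user owns what was downloaded on its behalf
chown -R ec2-user:ec2-user /home/ec2-user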

Initiate an EC2 instance with a pack of commands

Is there a way to start an AWS EC2 instance with a pack of commands?
I'm creating a new instance, and the thing I want to achieve is to run some Linux commands automatically after starting it, without connecting to the machine and typing those commands manually.
This is exactly the purpose of UserData.
You list your script there (bash for Linux, or PowerShell for Windows), and it runs the first time the instance boots.
An example user data script, taken from the documentation, that performs the setup of a web server is below.
#!/bin/bash
# Install and start a LAMP stack
yum update -y
amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
yum install -y httpd mariadb-server
systemctl start httpd
systemctl enable httpd
# Let ec2-user manage files in the web root via the apache group
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www
chmod 2775 /var/www
find /var/www -type d -exec chmod 2775 {} \;
find /var/www -type f -exec chmod 0664 {} \;
# Drop in a test page
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
In the event you need to debug, take a look at the /var/log/cloud-init-output.log file once the instance has launched.
However, if there is a larger number of steps, it might be preferable to create a pre-baked AMI, which involves setting up a blank server with all the necessary services and configuration using a tool such as Ansible, Chef, or Puppet.
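If you launch instances from the command line, the user data script can be attached at launch time. A sketch with the AWS CLI (the AMI ID, instance type, and key name below are placeholders):
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t3.micro \
    --key-name my-key \
    --user-data file://setup.sh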

Django [Errno 13] Permission denied: '/var/www/media/'

I cannot add a comment to this post - https://stackoverflow.com/a/21797786/6143954 - so I created a new question.
It was the correct answer before it was edited. In that answer, the line
sudo chmod -R 770 /var/www/
was replaced by
sudo chmod -R 760 /var/www/
and specifically, 760 is not suitable for Django. The answer should not have been changed after it was marked as the accepted solution.
The GOOD solution would be:
sudo groupadd varwwwusers
sudo adduser www-data varwwwusers
sudo chgrp -R varwwwusers /var/www/
sudo chmod -R 770 /var/www/
How correct is this solution?
sudo chmod -R 770 /var/www/ is fine.
It means that the owner and group have all rights and others have none.
This is the right way.
If you set 760, group users will get permission denied on read or write attempts, because a directory needs the execute bit before the group can traverse it.
For the files inside the directory, you can set them to 760.
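To apply that split explicitly (directories group-traversable at 770, files at 760), the same find pattern used in other answers on this page works; a sketch:
sudo find /var/www -type d -exec chmod 770 {} \;
sudo find /var/www -type f -exec chmod 760 {} \;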

Restream an mp3 stream over https with ssl

I need to restream several existing mp3 streams over https.
I have a current stream with the URL:
http://cdn.stream.com/radio.mp3
and I would like to have it as:
https://cdn.newstream.com/radio.mp3
I have seen several solutions, such as:
rebuild my own cast with Icecast
nginx proxy
stunnel
CloudFront (could be expensive)
or a paid service: https://www.autopo.st/secure-streams/
But I couldn't find a simple tutorial with a cheap solution using AWS.
Is there any way to secure an existing stream cheaply using AWS?
Thanks,
If you are running Debian or Ubuntu, just install Icecast from the official Xiph.org repositories:
https://wiki.xiph.org/Icecast_Server/Installing_latest_version_(official_Xiph_repositories)
It has TLS support built in.
The certificate needs to be provided as a combined file, with both the public and private key in the same file. In the case of Letsencrypt, some ACME clients can natively produce that sort of output.
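If yours doesn't, the combined file can be produced by concatenating the two PEM files; a sketch assuming a standard certbot layout (adjust the domain, and point your Icecast config at the output path):
cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /etc/icecast2/bundle.pem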
As you don't specify whether you control the origin server or need to relay an external server, I won't venture into further explanations; please clarify your question if you need specific aspects covered.
With the help of a freelancer, here is what I ended up doing.
I registered a domain mydomain.com and pointed it to an Ubuntu machine hosted at Hetzner to get good value for money on network traffic.
So mydomain.com points to the IP of the server, 130.130.130.130.
Run this on the machine:
cd /usr/local/bin
# Download the Caddy binary and make it executable
wget https://www.dropbox.com/s/lnk9mriccwydhow/caddy
chown root:root /usr/local/bin/caddy
chmod 755 /usr/local/bin/caddy
# Allow Caddy to bind ports 80/443 without running as root
setcap 'cap_net_bind_service=+ep' /usr/local/bin/caddy
# Create the www-data user and the config/certificate directories
groupadd -g 33 www-data
useradd -g www-data --no-user-group --home-dir /var/www --no-create-home --shell /usr/sbin/nologin --system --uid 33 www-data
mkdir /etc/caddy
chown -R root:www-data /etc/caddy
mkdir /etc/ssl/caddy
chown -R root:www-data /etc/ssl/caddy
chmod 0770 /etc/ssl/caddy
# Install Caddy as a systemd service
wget https://raw.githubusercontent.com/mholt/caddy/master/dist/init/linux-systemd/caddy.service
cp caddy.service /etc/systemd/system/
chown root:root /etc/systemd/system/caddy.service
chmod 644 /etc/systemd/system/caddy.service
systemctl daemon-reload
systemctl start caddy.service
Create the file /etc/caddy/Caddyfile with this content:
securedStream.mydomain.com {
proxy / http://originStream.com
}
Then run these commands to enable autostart, then start Caddy and check its status:
systemctl enable caddy
systemctl start caddy
systemctl status caddy
Then access https://securedStream.mydomain.com/
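To verify that the stream is now served over TLS, a quick check (same domain as above):
curl -I https://securedStream.mydomain.com/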

PHP Fatal Error on 'php artisan migrate' on remote AWS EB instance: laravel.log: Permission denied

When I SSH into my AWS EB instance to run php artisan migrate, I get a PHP fatal error ending in laravel.log: Permission denied (screenshot omitted).
I am completely confused. First, I don't get this error on the local server. Second, what does a simple log file have to do with migrations anyway? Log files are ignored by git by default, so no log files were uploaded.
Sigh... Any ideas on how I can be allowed to run my php artisan migrate?
It's always the storage folder. Blank pages or permission denied, it's the darn storage folder.
I don't know how EB works, whether it's a regular distro or not, but you should change ownership of the storage folder to the web server user (www-data most likely) so it can build the views, then set 775 permissions so you can write/read logs.
So something like:
sudo chown -R www-data:www-data storage/
sudo chmod -R 775 storage/
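On Elastic Beanstalk the application root is usually /var/app/current (as noted in an answer above), so, as a sketch assuming the www-data user applies to your platform:
cd /var/app/current
sudo chown -R www-data:www-data storage/
sudo chmod -R 775 storage/
php artisan migrate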
I've gone through the same error.
As stated here:
AWS AMI uses webapp as the web user, not apache or ec2-user as the file shows. In that case, the webapp user has no access rights over those files.
So going through the steps mentioned there fixed the problem:
sudo chown $USER:webapp ./storage -R
find ./storage -type d -exec chmod 775 {} \;
find ./storage -type f -exec chmod 664 {} \;
Depending on what you're aiming to do afterwards you might need to go through this too.