Setting write permissions on Django Log files - django

I have a set of Django log files, for which I have the appropriate logger set to write out messages. However, each time it creates a new log file, the permissions on that file don't allow me to start the shell, and at times they cause issues with Apache.
I have run chmod -Rv 777 on the directory, which sets all the permissions so we can do what we like, but the next log file created goes back to some default.
How can I set permissions on the log files that will be created?
Marc

Permissions on files created by a particular user depend on the umask set for that user.
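For example (the values below are illustrative; a daemon's umask is normally set in its service or WSGI configuration rather than in an interactive shell):
umask              # show the current mask, e.g. 0022 -> new files are created as 644
umask 002          # in this shell, new files are now created as 664 (group-writable)
touch test.log && ls -l test.log    # should show -rw-rw-r-- under umask 002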
Now you need to set the appropriate permissions for whichever user is running the Apache service. First, find out who that is:
ps aux | grep apache | awk '{ print $1 }'
Then, for that user running Apache (www-data?):
sudo chown -R your_user:user_running_apache directory
where directory is the root directory of your Django application.
To make sure that files added to this directory in the future inherit the correct group ownership, run:
sudo chmod -R g+s directory

I faced the same problem: I had issues starting the shell and with Celery due to rotated-log-file permissions. I'm running my Django project through uWSGI (which runs as the www-data user), so I handled it by setting a umask for it (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#umask).
Also, I'm using buildout, so my fix looks like this:
[uwsgi]
recipe = buildout.recipe.uwsgi
xml-socket = /tmp/uwsgi.sock
xml-master = True
xml-chmod-socket = 664
xml-umask = 0002
xml-workers = 3
xml-env = ...
xml-wsgi-file = ...
After this, the log file permissions became 664, so members of the www-data group can also write to it.
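If you are not using buildout, the same options (minus the xml- prefix the recipe adds) can be passed to uWSGI directly; a rough command-line equivalent of the config above, with an illustrative wsgi-file path, would be:
uwsgi --socket /tmp/uwsgi.sock --master --chmod-socket 664 --umask 002 \
      --workers 3 --wsgi-file /path/to/your/wsgi.py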

Related

$HOME is not set for ec2-user during commands in User Data run

I put the following commands in the user data of an EC2 instance running the RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the filesystem root, which looks to me like $HOME is not set at that point.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Following are the logs from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH into the instance, $HOME is set correctly to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as the root user, so $HOME is empty. What you could do is define the HOME env var at the top of your user data script, like this (insert your user's home directory here):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, Java, Git, Docker; all work fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user. For example, if your user data sets up some files in your home directory:
chown ubuntu ~/.foo/bar.properties
$HOME refers to the home directory of the logged-in user. User data runs under the root user, and the root user's $HOME is /. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory as a literal path, e.g. /home/ec2-user.
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
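For example, a hedged rewrite of the snippet from the question with the path spelled out (assuming the default ec2-user account):
# user data already runs as root, so sudo isn't needed; chown to the login user explicitly
mkdir -p /home/ec2-user/.kube
cp -i /etc/kubernetes/admin.conf /home/ec2-user/.kube/config
chown ec2-user:ec2-user /home/ec2-user/.kube/config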
You are running with sudo, which is known to change environment variables that are established by your user's shell (such as $HOME), as well as shell-context-based things such as ssh-agent.
Generally, you can ensure $HOME persists when you run sudo by adding it to the env_keep settings in your sudoers configuration, i.e. by adding the line below within /etc/sudoers. More information is available in the sudoers documentation; be careful about modifying this file.
Defaults env_keep=HOME
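If you do edit it, visudo is the safer route because it validates the syntax before saving; a drop-in file also works (the file name below is just an example):
sudo visudo                               # edit /etc/sudoers with syntax checking
sudo visudo -f /etc/sudoers.d/keep-home   # or keep the change in a separate drop-in file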
Otherwise, if you don't want to make the above change, ensure you have the permissions to carry this out without running sudo, or pass in an absolute path.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef, or Puppet.
Alternatively, since this is within the user data anyway, it is unlikely you have already configured sudoers, so you should instead just specify the path.
I faced the same issue. Adding this to the user data script helped resolve it; with this change to the profile, subshells will have HOME set.
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh

I have deployed my Django app on an Apache server. All the static files are being served properly, but it's not accepting media files from form input.

While submitting a form with a media input, it shows:
[Errno 13] Permission denied: '/home/ubuntu/django/media/pictures'
I have searched on Google, but no one mentions giving permissions for media files; they all talk about static files only. Can anyone please tell me which permission number I have to give it with chmod?
You're having permission issues. To fix them, you need to allow the Apache process access to the folder and its content.
This can be done through the following steps:
Change the permissions so files are readable and writable, keeping the execute bit on directories so they can still be traversed:
chmod -R 664 /home/ubuntu/django/media/pictures
chmod -R a+X /home/ubuntu/django/media/pictures
Give the group Apache runs under (the www-data group) group ownership of the folder and its content:
sudo chown -R :www-data /home/ubuntu/django/media/pictures
Restart the Apache service:
sudo service apache2 restart
If you want to ensure Django behaves as it should, you can also add the following to your settings.py:
FILE_UPLOAD_DIRECTORY_PERMISSIONS = 0o755
FILE_UPLOAD_PERMISSIONS = 0o644
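To double-check the result, you could compare the user Apache runs as against the ownership of the upload directory (a rough sketch):
ps aux | grep -E '[a]pache2|[h]ttpd' | awk '{ print $1 }' | sort -u   # user(s) Apache runs as
ls -ld /home/ubuntu/django/media/pictures                             # owner, group and mode of the upload dir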

Singularity - Centos7 - Permission

I use Singularity on a CentOS 7 system, but I'm stuck on an incomprehensible permission-like problem.
(It's a centrifuge/recentrifuge container I made myself that works perfectly on an Ubuntu system.)
Command:
singularity exec /HOMEPATH/Singularity/centrifuge_recentrifuge.simg centrifuge -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -1 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TTOTO_R1_001.fastq.gz -2 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TOTO_R2_001.fastq.gz -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary
Error log:
Error: Could not open alignment output file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result
Error: Encountered internal Centrifuge exception (#1)
Command: /usr/local/bin/centrifuge-class --wrapper basic-0 -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary -1 /tmp/229778.inpipe1 -2 /tmp/229778.inpipe2
(ERR): centrifuge-class exited with value 1
It seems like Singularity cannot write the tmp files, or the classification_result file, or both. :/
Work directory permissions:
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
EDIT 1: Trying to resolve the permission problem
ls -Z centrifuge_recentrifuge
drwxr-xr-x. apache apache system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z centrifuge_recentrifuge/reads/
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
And the error is still the same...
I have run sudo chown -R apache:apache /tmp on the tmp folder, but it does not have any effect. :/
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
This says that only the owner of the centrifuge_recentrifuge directory has permission to create items in that directory, and that the owner of the directory is the user root. So, only root can create items in that directory.
Presumably you were not running the singularity program while logged in as root and that's why the program was unable to create a classification_result file. It wouldn't have been able to create a classification_summary file either, if it had got as far as trying to do that.
I don't know if you have a special reason for having this directory owned by root. If you do, then the only way the program is going to be able to create these files is if you run it as the root user. Of course it's generally a bad idea to use the root account for anything other than system administration.
The usual approach is to have the HOMEPATH directory, and everything below it, be owned by the individual (non-root) user for whom that particular HOMEPATH was created. In this model, that individual user would be the owner of the centrifuge_recentrifuge directory, and therefore if you run the singularity program when logged in as that user it will be able to create whatever files it needs there.
To get to that situation from where you are now, that is, to change the ownership of HOMEPATH and everything beneath it, log in as root (or use sudo) and then run:
chown -R myuser /HOMEPATH
where myuser is the username of the account that has HOMEPATH as its home directory.
That should be enough to let the program run. However, for completeness you should also change the group ownership of HOMEPATH and everything beneath it to match the individual user's group. To do that, run:
chown -R myuser:mygroup /HOMEPATH
where mygroup is the group that contains the user myuser. If you don't know what that group name should be, log in as myuser and run the id -ng command. It's common to have the group name be the same as the user name, so don't be surprised if the result of that id command is the same as myuser. On some systems you can run:
chown -R myuser: /HOMEPATH
with just a colon : after myuser and the command will figure out the group name for you. If that works on your system then you don't need to do the id -ng dance.
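Putting the two steps together (myuser is still a placeholder), that might look like:
id -ng myuser                                      # print myuser's primary group
sudo chown -R myuser:"$(id -ng myuser)" /HOMEPATH  # set owner and group in one go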

Django server error 403 forbidden nginx/1.10.3 (ubuntu)

I have some media content on an Ubuntu server. I can upload files, but when I try to load files it shows 403 Forbidden nginx/1.10.3 (ubuntu). The file permissions display as -rw-------.
How can I retrieve all content without error?
I'm not familiar with Ubuntu.
I used this snippet to recover the files. However, it only works a single time; after a while, it shows the same error.
sudo chmod -R 664 /home/django/media/image/
sudo chmod -R a+X /home/django/media/image/
The nginx user must be able to read those files. You can use group permissions to allow that. Also, the WSGI user must have its umask set so that files it creates are readable by the group as well.
In your case it looks like your WSGI user has umask 077, which makes files it creates readable only by the owner (-rw-------). Thus nginx does not have read permission. Instead use umask 027, which will permit group users to read those files, but not write to them (there's no reason for nginx to have write access).
For example, if you are using gunicorn as your WSGI server, you can use the gunicorn flags --group www --umask 027. Make sure both the gunicorn and nginx users belong to the www group.
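A gunicorn invocation along those lines might look like this (the project module and socket path are placeholders):
gunicorn myproject.wsgi:application \
    --bind unix:/run/gunicorn.sock \
    --group www --umask 027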
Fix the permissions with something like this.
# set group to `www` for all files recursively
sudo chgrp www -R /home/django/media/
# set all files to be read/write by owner and readable by group `www`
find /home/django/media/ -type f -exec chmod 640 {} \;
# same with directories +execute
find /home/django/media/ -type d -exec chmod 750 {} \;
Alternatively, use 644 for files and 755 for directories, and 022 for the umask. Then group permissions don't matter, since all users get read access.
The latter option is not security best practice, but it's probably fine, as long as you only give the django user write access.
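In that world-readable variant, the corresponding commands would be something like:
find /home/django/media/ -type f -exec chmod 644 {} \;
find /home/django/media/ -type d -exec chmod 755 {} \;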

ls: cannot open directory '.': Permission denied

I have my application running on an ec2 instance.
I can successfully SSH into the instance, but when I cd into the correct folder and run ls I get the following error:
ls: cannot open directory '.': Permission denied
It seems like it has something to do with my user permissions because running the application also throws a 403 Forbidden error.
The permissions for my application folder are as follows:
d-wx-wx--x 17 ubuntu ubuntu 4096 Apr 20 10:53 application-name
Do I need to change this to something else to make it work? And how?
This error makes sense if you don't have enough privileges to read that directory. Try changing the permissions for the current user, or change the access mode globally, i.e. 777.
For example:
sudo bash
chmod 775 .
This is basically caused when the current user doesn't have enough permission to read/write/execute the contents of that directory.
Here's how you can fix it:
To grant the user permission to just the current directory, you could do this:
sudo chmod 775 directory_name
OR
sudo chmod a+rwx,o-w directory_name
To grant the user permission to the current directory, its subdirectories and files, you could do this:
sudo chmod -R 775 directory_name
OR
sudo chmod -R a+rwx,o-w directory_name
Note:
chmod means change mode or in a more literal sense change access permissions.
-R means change files and directories recursively.
a means all users
r means read permission
w means write permission
x means execute permission
o means others
+ means add
- means remove.
So it means recursively add read, write and execute permissions to everyone, but then remove write permissions from others.
That's all.
I hope this helps
You don't have read permission on your folder.
Run chmod 775 application-name to make your folder readable.
You'll find additional info about chmod at this link: https://kb.iu.edu/d/abdb
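If you would rather make the smallest change that fixes the mode shown in the question (d-wx-wx--x is missing the read bit everywhere), adding just the read bits for owner and group is enough for ls to work:
chmod u+r,g+r application-name   # the execute (search) bit is already set, so cd keeps working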