Singularity - CentOS 7 - Permission problem

I use Singularity on a CentOS 7 system, but I am stuck on an incomprehensible permission-like problem.
(It's a centrifuge/recentrifuge container that I built myself, and it works perfectly on an Ubuntu system.)
Command:
singularity exec /HOMEPATH/Singularity/centrifuge_recentrifuge.simg centrifuge -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -1 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TTOTO_R1_001.fastq.gz -2 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TOTO_R2_001.fastq.gz -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary
Error log:
Error: Could not open alignment output file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result
Error: Encountered internal Centrifuge exception (#1)
Command: /usr/local/bin/centrifuge-class --wrapper basic-0 -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary -1 /tmp/229778.inpipe1 -2 /tmp/229778.inpipe2
(ERR): centrifuge-class exited with value 1
It seems like Singularity cannot write the tmp files, or cannot write the classification_result file, or both :/
Work directory permissions:
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
EDIT 1: Attempt to resolve the permission problem
ls -Z centrifuge_recentrifuge
drwxr-xr-x. apache apache system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z centrifuge_recentrifuge/reads/
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
And the error is still the same...
I also ran sudo chown -R apache:apache /tmp on the tmp folder, but it had no effect :/

ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
This says that only the owner of the centrifuge_recentrifuge directory has permission to create items in that directory, and that the owner of the directory is the user root. So only root can create items in that directory.
Presumably you were not running the singularity program while logged in as root and that's why the program was unable to create a classification_result file. It wouldn't have been able to create a classification_summary file either, if it had got as far as trying to do that.
I don't know if you have a special reason for having this directory owned by root. If you do, then the only way the program is going to be able to create these files is if you run it as the root user. Of course it's generally a bad idea to use the root account for anything other than system administration.
The usual approach is to have the HOMEPATH directory, and everything below it, be owned by the individual (non-root) user for whom that particular HOMEPATH was created. In this model, that individual user would be the owner of the centrifuge_recentrifuge directory, and therefore if you run the singularity program when logged in as that user it will be able to create whatever files it needs there.
To get from where you are now to that situation, change the ownership of HOMEPATH and everything beneath it: log in as root (or use sudo) and run:
chown -R myuser /HOMEPATH
where myuser is the username of the account that has HOMEPATH as its home directory.
That should be enough to let the program run. However, for completeness you should also change the group ownership of HOMEPATH and everything beneath it to match the individual user's group. To do that, run:
chown -R myuser:mygroup /HOMEPATH
where mygroup is the group that contains the user myuser. If you don't know what that group name should be, log in as myuser and run the id -ng command. It's common to have the group name be the same as the user name, so don't be surprised if the result of that id command is the same as myuser. On some systems you can run:
chown -R myuser: /HOMEPATH
with just a colon : after myuser and the command will figure out the group name for you. If that works on your system then you don't need to do the id -ng dance.
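Putting those steps together, a minimal sketch of the whole fix could look like this (myuser is still a placeholder for your actual account, and /HOMEPATH for its home directory; the final ls -ld is just a verification step):
# check which group the account belongs to (often the same name as the user)
id -ng myuser
# hand the whole tree back to that user and its primary group
sudo chown -R myuser:"$(id -ng myuser)" /HOMEPATH
# verify: the work directory should now be owned by myuser instead of root
ls -ld /HOMEPATH/work_directory/centrifuge_recentrifuge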

Related

$HOME is not set for ec2-user when User Data commands run

I put the following commands in the user data of an EC2 instance running a RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the filesystem root, which looks to me like $HOME is not set at that time.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The following is the log from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH to the instance, the $HOME is set correctly to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as user root, so $HOME is empty. What you could do is define the HOME env var at the top of your user data script, like this (insert your own user's home directory here):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, java, git, docker; all works fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user. For example, if your user data sets up some files in your home directory:
chown ubuntu ~/.foo/bar.properties
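For the RedHat instance from the question, a minimal sketch of a user data script along these lines could be (ec2-user and the .kube paths come from the question; adjust them to your setup):
#!/bin/bash
# user data runs as root, so set HOME explicitly before anything relies on it
export HOME=/home/ec2-user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
# hand the copied config back to the login user at the end
chown ec2-user:ec2-user $HOME/.kube/config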
$HOME refers to the home directory of the logged-in user. User data runs under the root user, and the root user's $HOME is /. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory path as a literal (e.g. /home/ec2-user).
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
You are running commands with sudo, which is known to change environment variables that are established by your user's shell (such as $HOME), as well as shell-context state such as the ssh-agent.
Generally you can ensure $HOME persists when you run sudo by adding it to the env_keep settings in your sudoers configuration, i.e. by adding the line below to /etc/sudoers. More information is available in the sudoers documentation; be careful about modifying this file (edit it with visudo).
Defaults env_keep += "HOME"
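If you make that change, a quick sanity check is to compare the value with and without sudo as your normal user (the paths in the comments are only what you would expect to see, not captured output):
echo $HOME          # e.g. /home/ec2-user
sudo printenv HOME  # with env_keep in place, this should print the same path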
Otherwise, if you don't want to make the above change, ensure you have the permissions to carry this out without running sudo, or pass an absolute path in.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef, or Puppet.
Alternatively, since this is within the User Data anyway, it is unlikely you have already adjusted the sudoers configuration at that point, so you should instead just specify the absolute path.
I faced the same issue. Adding this to the User Data script helped resolve it. The subshells will have HOME set with this change to the profile:
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh

Django server error 403 forbidden nginx/1.10.3 (ubuntu)

I have some media content on an Ubuntu server. I can upload files, but when I try to load files it shows 403 Forbidden nginx/1.10.3 (Ubuntu). The file permissions display as rw--------.
How can I retrieve all content without error?
I'm not familiar with Ubuntu
I used this snippet to recover the files. However, it only works a single time; after a while, it shows the same error.
sudo chmod -R 664 /home/django/media/image/
sudo chmod -R a+X /home/django/media/image/
The nginx user must be able to read those files. You can use group permissions to allow that. Also the wsgi user must have its umask set so that files it creates are readable for the group as well.
In your case it looks like your wsgi user has umask 077, which makes files it creates only readable by the owner (rw--------). Thus nginx does not have read permission. Instead use umask 027, which will permit group users to access those files, but not write to them (there's no reason for nginx to have write access).
For example, if you are using gunicorn as your wsgi server, you can use the gunicorn flags --group www --umask 027. Make sure both the gunicorn and nginx users belong to the www group.
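For illustration, a gunicorn command line using those flags might look roughly like this (the module name mysite.wsgi and the socket path are placeholders, not taken from the question):
gunicorn mysite.wsgi:application --bind unix:/run/gunicorn.sock --group www --umask 027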
Fix the permissions with something like this:
# set group to `www` for all files recursively
sudo chgrp www -R /home/django/media/
# set all files to be read/write by owner and readable by group `www`
find /home/django/media/ -type f -exec chmod 640 {} \;
# same with directories +execute
find /home/django/media/ -type d -exec chmod 750 {} \;
Alternatively, use 644 for files and 755 for directories, and 022 for umask. Then group permissions don't matter, since all users get read access.
The latter option is not security best practice, but it's probably fine, as long as you only give the django user write access.
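Under that more permissive alternative, the equivalent commands to the ones above would be:
find /home/django/media/ -type f -exec chmod 644 {} \;
find /home/django/media/ -type d -exec chmod 755 {} \;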

ls: cannot open directory '.': Permission denied

I have my application running on an ec2 instance.
I can successfully ssh into my application, but when I cd into the correct folder and run ls I get the following error:
ls: cannot open directory '.': Permission denied
It seems like it has something to do with my user permissions because running the application also throws a 403 Forbidden error.
The permissions for my application folder are as follows:
d-wx-wx--x 17 ubuntu ubuntu 4096 Apr 20 10:53 application-name
Do I need to change this to something else to make it work? And how?
This error makes sense if you don't have enough privileges to read that directory. Try changing the permissions for the current user, or change the access mode globally, i.e. 777.
For example:
sudo bash
chmod 775 .
This is basically caused when the current user doesn't have enough permission to read/write/execute the contents of that directory.
Here's how you can fix it:
To grant the user permission to just the current directory, you could do this:
sudo chmod 775 directory_name
OR
sudo chmod a+rwx,o-w directory_name
To grant the user permission to the current directory, it's subdirectories and files, you could do this:
sudo chmod -R 775 directory_name
OR
sudo chmod -R a+rwx,o-w directory_name
Note:
chmod means change mode or in a more literal sense change access permissions.
-R means change files and directories recursively.
a means all users
r means read permission
w means write permission
x means execute permission
o means others
+ means add
- means remove.
So it means recursively add read, write and execute permissions to everyone, but then remove write permissions from others.
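Applied to the application-name directory from this question, the before/after would look roughly like this (the mode strings in the comments are what you would expect to see, not captured output):
ls -ld application-name    # d-wx-wx--x before the change
sudo chmod 775 application-name
ls -ld application-name    # drwxrwxr-x after the change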
That's all.
I hope this helps
You don't have read permission on your folder.
Run chmod 775 application-name to allow read in your folder.
You'll find additional info about chmod at this link: https://kb.iu.edu/d/abdb

Setting write permissions on Django Log files

I have a set of Django log files, for which I have the appropriate logger set to write out messages. However, each time it creates a new log file, the permissions on the file don't allow me to start the shell, and at times cause issues with apache.
I have run chmod -Rv 777 on the directory, which sets all the permissions so we can do what we like, but the next logfile created goes back to some default.
How can I set permissions on the logfiles to be created?
Marc
Permissions on files created by a particular user depend on the umask set for that particular user.
Now you need to set the appropriate permissions for whoever is running the apache service:
ps -aux | grep apache | awk '{ print $1 }'
Then for this particular user running apache (www-data?)
sudo chown -R your_user:user_running_apache directory
where directory is the root directory of your django application.
To make sure that all the files added to this directory in the future have the correct permissions, run:
sudo chmod -R g+s directory
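Putting this answer's steps together as one sequence (www-data is just the guess from above, and directory stands for your Django project root):
# find out which user the apache service runs as
ps -aux | grep apache | awk '{ print $1 }'
# give your user ownership and the apache user's group
sudo chown -R your_user:www-data directory
# make files created inside the tree inherit that group
sudo chmod -R g+s directory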
I faced the same problem - I had issues with starting the shell and with celery due to rotated-log-file permissions. I'm running my Django project through uwsgi (which runs as the www-data user), so I handled it by setting the umask for it (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#umask).
Also I'm using buildout, so my fix looks like this:
[uwsgi]
recipe = buildout.recipe.uwsgi
xml-socket = /tmp/uwsgi.sock
xml-master = True
xml-chmod-socket = 664
xml-umask = 0002
xml-workers = 3
xml-env = ...
xml-wsgi-file = ...
After this, the log file permissions became 664, so members of the www-data group can also write to them.
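If you are not using buildout, a roughly equivalent plain uwsgi ini file would be (values copied from the buildout section above):
[uwsgi]
socket = /tmp/uwsgi.sock
master = true
chmod-socket = 664
umask = 0002
workers = 3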

Programmatically drop Linux cache as non-root user

For testing purposes, I can drop cached memory by writing to the drop_caches file in Linux under the procfs. I can only do this as root. This is on embedded Linux so there is no sudo.
sync; echo 3 > /proc/sys/vm/drop_caches
I can write to the file programmatically in C++ by doing something like the following, taken from the post "How to programmatically clear the filesystem memory cache in C++ on a Linux system?":
#include <unistd.h>  // sync()
#include <fstream>
sync();
std::ofstream ofs("/proc/sys/vm/drop_caches");
ofs << "3" << std::endl;
The challenge is wanting to do this while running the app as a non-root user. On reboot, the permissions look like:
# cd /proc/sys/vm/
# ls -lrt drop_caches
-rw-r--r-- 1 root root 0 Feb 13 19:50 drop_caches
And you cannot seem to change those permissions - even as root:
# chmod 777 drop_caches
chmod: drop_caches: Operation not permitted
# chown user:user drop_caches
chown: drop_caches: Operation not permitted
How can I accomplish this on Linux? Is it possible to change permissions of a procfs file? I can fully customize my kernel if necessary. Thanks -
You can create an auxiliary executable (be very careful, it is dangerous) which any user can run with root permissions.
This is called setuid.
For safety reasons, you cannot setuid a shell script.
Extracting from the wiki how to use it:
The setuid and setgid bits are normally set with the command chmod by setting the high-order octal digit to 4 (for setuid) or 2 (for setgid). "chmod 6711 file" will set both the setuid and setgid bits (2+4=6).
Update
As @rici noted, you will still need execute permission to run this process, so you can remove execute permission from others and keep it only for the group. That way, only members of the group can execute it.
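A minimal sketch of how such a wrapper could be installed once it is compiled (the helper name drop_caches_helper and the group cachedrop are hypothetical, not from the question):
# root owns the helper; only the cachedrop group may run it
chown root:cachedrop drop_caches_helper
# 4750 = setuid bit, rwx for the owner, r-x for the group, no access for others
chmod 4750 drop_caches_helper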