Programmatically drop Linux cache as non-root user - C++

For testing purposes, I can drop cached memory by writing to the drop_caches file in Linux under the procfs. I can only do this as root. This is on embedded Linux so there is no sudo.
sync; echo 3 > /proc/sys/vm/drop_caches
I can write to the file programmatically in C++, following this post: How to programmatically clear the filesystem memory cache in C++ on a Linux system?
sync();
std::ofstream ofs("/proc/sys/vm/drop_caches");
ofs << "3" << std::endl;
The challenge is wanting to do this while running the app as a non-root user. On reboot, the permissions look like:
# cd /proc/sys/vm/
# ls -lrt drop_caches
-rw-r--r-- 1 root root 0 Feb 13 19:50 drop_caches
And you cannot seem to change those permissions - even as root:
# chmod 777 drop_caches
chmod: drop_caches: Operation not permitted
# chown user:user drop_caches
chown: drop_caches: Operation not permitted
How can I accomplish this on Linux? Is it possible to change permissions of a procfs file? I can fully customize my kernel if necessary. Thanks -

You can create an auxiliary executable (be very careful, it is dangerous) which any user can run with root permissions.
This is called setuid.
For safety reasons, you cannot setuid a shell script.
Quoting the Wikipedia article on how to use it:
The setuid and setgid bits are normally set with the command chmod by setting the high-order octal digit to 4 (for setuid) or 2 (for setgid). "chmod 6711 file" will set both the setuid and setgid bits (2+4=6).
Update
As @rici noted, a user still needs execute permission on the file to run it, so you can remove execute permission from others and keep it only for the group. Then only members of that group can execute it.
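Putting that together, a sketch of installing the compiled C++ helper from the question with the setuid bit set. The group name cachedrop and the paths are assumptions, not from the question; on the target you would run the chown/chmod as root against the real binary, while here a placeholder file in /tmp stands in so the mode change can be tried without root.

```shell
# Placeholder standing in for the compiled C++ helper; on the real
# system, use the actual binary and run the commented chown as root.
helper=/tmp/drop_caches_helper
touch "$helper"
# chown root:cachedrop "$helper"   # root only: owner must be root for setuid-root
chmod 4750 "$helper"               # 4 = setuid bit, 750 = rwxr-x---
stat -c '%a %n' "$helper"          # mode shows as 4750
```

With the real binary owned by root:cachedrop and mode 4750, any member of cachedrop can run it, and the process executes with root's effective uid, so the write to /proc/sys/vm/drop_caches succeeds.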

Related

$HOME is not set for ec2-user when commands in User Data run

I put the following commands in the user data of an EC2 instance running the RedHat 8 AMI (ami-0fc841be1f929d7d1). When they run, mkdir tries to create .kube at the filesystem root, which looks to me like $HOME is not set at the time.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The following is the log from /var/log/user-data.log:
+ mkdir -p /.kube
+ sudo cp -i /etc/kubernetes/admin.conf /.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /.kube/config
When I SSH to the instance, the $HOME is set correctly to /home/ec2-user.
Could you advise what I did wrong here?
Thank you
When your EC2 server is provisioned, the user data script runs as the root user, so $HOME is not your user's home directory. What you can do is define the HOME env var at the top of your user data script, like this (insert your user's home directory here):
export HOME=/home/ubuntu
I've tried it and it works (I install NVM, SDKMAN, sbt, java, git, docker; all work fine). You might need to do some chown at the end of your user data script to change the owner of some files back to your user. For example, if your user data sets up some files in your home directory:
chown ubuntu ~/.foo/bar.properties
$HOME refers to the home directory of the logged-in user. User data runs under the root user, and the root user's $HOME is /. That is the result you are seeing.
Instead of the variable $HOME, your script should refer to the home directory as a literal path (e.g. /home/ec2-user).
See https://superuser.com/questions/271925/where-is-the-home-environment-variable-set
You are running with sudo, which is known to change environment variables that are established by your user's shell (such as $HOME), as well as shell-based context such as the ssh-agent.
Generally you can make $HOME persist under sudo by adding it to the env_keep setting in your sudoers configuration, i.e. adding the line below to /etc/sudoers. Be careful about modifying this file.
Defaults env_keep=HOME
Otherwise, if you don't want to make the above change, either ensure you have the permissions to carry this out without running sudo, or pass an absolute path in.
I would generally steer clear of user data for important configuration anyway; instead, build a pre-baked AMI ahead of time with the configuration the way you want it, using a configuration tool such as Ansible, Chef, or Puppet.
Alternatively, since this is within the User Data anyway, it is unlikely you have already configured sudoers at that point, so you should instead just specify the path.
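A sketch of the same user-data steps with the path spelled out literally. ec2-user is an assumption (the default login user on Amazon's RedHat AMIs; adjust if yours differs), and the guards around cp and chown are only there so the snippet can be tried outside a real kubeadm host.

```shell
#!/bin/bash
# User data runs as root, so $HOME is not /home/ec2-user; use the literal path.
USER_HOME=/home/ec2-user
mkdir -p "$USER_HOME/.kube"
# admin.conf only exists once kubeadm init/join has run; guard the copy.
if [ -f /etc/kubernetes/admin.conf ]; then
  cp -i /etc/kubernetes/admin.conf "$USER_HOME/.kube/config"
fi
# Everything above ran as root; hand the files back to the login user.
if id ec2-user >/dev/null 2>&1; then
  chown -R ec2-user:ec2-user "$USER_HOME/.kube"
fi
```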
I faced the same issue. Adding this to the User Data script helped resolve it. With this change to the profile, subshells will have HOME set.
cat > /etc/profile.d/set_home.sh << 'EOF'
export HOME=~
EOF
chmod a+x /etc/profile.d/set_home.sh

Singularity - Centos7 - Permission

I use Singularity on a CentOS 7 system, but I am blocked by an incomprehensible permission-like problem.
(It's a centrifuge/recentrifuge container I made myself that works perfectly on an Ubuntu system.)
Command:
singularity exec /HOMEPATH/Singularity/centrifuge_recentrifuge.simg centrifuge -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -1 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TOTO_R1_001.fastq.gz -2 /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/TOTO_R2_001.fastq.gz -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary
Error log:
Error: Could not open alignment output file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result
Error: Encountered internal Centrifuge exception (#1)
Command: /usr/local/bin/centrifuge-class --wrapper basic-0 -x /HOMEPATH/Centrifuge/bacteria-database-centrifuge -S /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_result --report-file /HOMEPATH/work_directory/centrifuge_recentrifuge/classification_summary -1 /tmp/229778.inpipe1 -2 /tmp/229778.inpipe2
(ERR): centrifuge-class exited with value 1
It seems like Singularity cannot write the tmp files, or the classification_result file, or both. :/
Work directory permissions:
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/reads/
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
EDIT 1 - Attempted fix for the permission problem:
ls -Z centrifuge_recentrifuge
drwxr-xr-x. apache apache system_u:object_r:httpd_sys_content_t:s0 reads
ls -Z centrifuge_recentrifuge/reads/
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R1_001.fastq.gz
-rw-r--r--. apache apache system_u:object_r:httpd_sys_content_t:s0 TOTO_R2_001.fastq.gz
And the error is still the same...
I have run sudo chown -R apache:apache /tmp on the tmp folder but it does not have any effect. :/
ls -Z /HOMEPATH/work_directory/centrifuge_recentrifuge/
drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 reads
This says that only the owner of the centrifuge_recentrifuge directory has permission to create items in that directory, and that the owner of the directory is the user root. So, only root can create items in that directory.
Presumably you were not running the singularity program while logged in as root and that's why the program was unable to create a classification_result file. It wouldn't have been able to create a classification_summary file either, if it had got as far as trying to do that.
I don't know if you have a special reason for having this directory owned by root. If you do, then the only way the program is going to be able to create these files is if you run it as the root user. Of course it's generally a bad idea to use the root account for anything other than system administration.
The usual approach is to have the HOMEPATH directory, and everything below it, be owned by the individual (non-root) user for whom that particular HOMEPATH was created. In this model, that individual user would be the owner of the centrifuge_recentrifuge directory, and therefore if you run the singularity program when logged in as that user it will be able to create whatever files it needs there.
To get there from where you are now, change the ownership of HOMEPATH and everything beneath it: log in as root (or use sudo) and run:
chown -R myuser /HOMEPATH
where myuser is the username of the account that has HOMEPATH as its home directory.
That should be enough to let the program run. However, for completeness you should also change the group ownership of HOMEPATH and everything beneath it to match the individual user's group. To do that, run:
chown -R myuser:mygroup /HOMEPATH
where mygroup is the group that contains the user myuser. If you don't know what that group name should be, log in as myuser and run the id -ng command. It's common to have the group name be the same as the user name, so don't be surprised if the result of that id command is the same as myuser. On some systems you can run:
chown -R myuser: /HOMEPATH
with just a colon : after myuser and the command will figure out the group name for you. If that works on your system then you don't need to do the id -ng dance.
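The colon shorthand can be tried safely on a file you already own, since changing the group of your own file to your own login group needs no root. The scratch path below is arbitrary:

```shell
f=/tmp/chown_colon_demo
touch "$f"
# "user:" with nothing after the colon makes chown look up that user's
# login group itself, equivalent to spelling out user:group by hand.
chown "$(id -un):" "$f"
stat -c '%U %G' "$f"    # owner, plus the automatically chosen group
```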

ls: cannot open directory '.': Permission denied

I have my application running on an ec2 instance.
I can successfully ssh into my application but when I cd in to the correct folder and run ls I get the following error:
ls: cannot open directory '.': Permission denied
It seems like it has something to do with my user permissions because running the application also throws a 403 Forbidden error.
The permissions for my application folder are as follows:
d-wx-wx--x 17 ubuntu ubuntu 4096 Apr 20 10:53 application-name
Do I need to change this to something else to make it work? And how?
This error makes sense if you don't have enough privileges to read that directory. Try changing the permissions for the current user, or open the access mode up globally, i.e. 777.
For example:
sudo bash
chmod 775 .
This is basically caused when the current user doesn't have enough permission to read/write/execute the contents of that directory.
Here's how you can fix it:
To grant the user permission to just the current directory, you could do this:
sudo chmod 775 directory_name
OR
sudo chmod a+rwx,o-w directory_name
To grant the user permission to the current directory, it's subdirectories and files, you could do this:
sudo chmod -R 775 directory_name
OR
sudo chmod -R a+rwx,o-w directory_name
Note:
chmod means change mode or in a more literal sense change access permissions.
-R means change files and directories recursively.
a means all users
r means read permission
w means write permission
x means execute permission
o means others
+ means add
- means remove.
So it means recursively add read, write and execute permissions to everyone, but then remove write permissions from others.
That's all.
I hope this helps
You don't have read permission on your folder.
Run chmod 775 application-name to allow read in your folder.
You'll find additional info about chmod at this link: https://kb.iu.edu/d/abdb
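The question's mode d-wx-wx--x is exactly the failing case: the execute bit lets you cd into the directory, but without the read bit ls cannot enumerate it. A self-contained reproduction using a scratch directory (run as a regular user; root bypasses these checks):

```shell
demo=/tmp/permdemo_dir
rm -rf "$demo" && mkdir "$demo"
chmod 331 "$demo"                 # d-wx-wx--x, as in the question
ls "$demo" 2>/dev/null || echo "listing denied"
chmod 775 "$demo"                 # add the read bits back
ls "$demo" >/dev/null && echo "listing works"
```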

uWSGI process doesn't inherit permissions associated with group its uid belongs to

I have a folder that I want to write to from a Django app using uwsgi (served by NGINX). I set ownership on that folder to root:writinggroup and set permissions on that folder to 775. I add the www-data user to the group writinggroup.
Then in my uwsgi ini file, I set:
uid = www-data
But when I run my server and hit the appropriate URL to trigger the write operation, I get a permissions error.
But if I switch the ownership of the folder to www-data:writinggroup, everything works perfectly.
So what's going on here? Why is it that having the user-owner of the folder set to www-data gets the job done, while setting the group-owner of the folder to writinggroup doesn't even though www-data (the user) is a member of that group?
Basically, what I'm asking is: if you set uid but not gid in the uwsgi config, why doesn't the uwsgi process behave like it inherits permissions associated with groups to which that uid belongs?
Following up on dgel's suggestion to learn about Unix permissioning: when you run processes in Unix, there are basically 3 ways you can wind up being "green lit" permissions-wise by the OS.
The user calling the command can be user-permissioned for the disk operation (green-lit because of effective user id)
The group id associated with the process can be group-permissioned for the disk operation (green-lit because of effective group id)
The user calling the command can be a member of a group that is group-permissioned for the disk operation (green-lit because of supplementary group id)
Number 3 there is important. It's how a lot of disk operations are ultimately green-lit. I.e. user joe is a member of group awesome. Folder important is owned by root:awesome with permissions 775. You execute a command at the terminal as joe where you try to write to important. This will be run with user joe and likely with effective group id joe (i.e. the group that contains only the user joe). Alone, this wouldn't get the job done. But because joe is in awesome and awesome the group has write permissions, your command is able to succeed.
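You can see all three identities for your own account with id; the supplementary groups are what a normal login shell carries along, and what a uid-only uWSGI vassal does not:

```shell
id -u    # effective user id  (case 1)
id -g    # effective group id (case 2)
id -G    # all group ids, including supplementary ones (case 3)
```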
When you run uWSGI in emperor mode, you can set a uid and gid for your vassal processes. I had assumed that this would result in vassals with the 3 types of permissions described above. But that's not quite right. While the vassals do indeed run with the effective user ID and effective group ID you specify, they do not wind up with any supplementary group IDs associated with their processes.
So say you have uWSGI running with uid=www-data and gid=www-data and you want to write to the important folder described above. So you make www-data a member of awesome. This will not be sufficient, because the vassal process will have only the permissions of www-data the user and www-data the group and not the permissions of the groups to which www-data (the user) belongs...i.e. it will not inherit the permissions of the awesome group. This leads to the annoying behavior that commands executed at the terminal after switching users to www-data may succeed while code run by the above-configured uWSGI process will fail (because at the terminal the command gets the permissions of the awesome group, but the vassals do not).
One solution would be to change the ownership of the important folder to be www-data:awesome. But I hate that answer since it doesn't generalize to a case where multiple users might need this kind of access. Instead, there's a better way: there is an add-gid option for uWSGI. So you would need to specify:
uid = www-data
gid = www-data
add-gid = awesome
in your uWSGI configuration. This parameter can be set multiple times, so you can associate as many supplementary groups with your vassal processes as your heart desires. You can read about it in the uWSGI 1.9.15 release notes.
A very important note is that this parameter was only added in uWSGI 1.9.15. This is much newer than the version that ships with Ubuntu. So if you (like me) are in that situation, you'll need to upgrade uWSGI. I did that with:
sudo mv /usr/bin/uwsgi /usr/bin/uwsgi.bak
sudo pip install -U uwsgi
sudo ln -fs /usr/local/bin/uwsgi /usr/bin/uwsgi
A quick service restart (sudo service uwsgi restart) and I was all set!

Setting write permissions on Django Log files

I have a set of Django Log files, for which I have the appropriate logger set to write out messages. However each time it creates a new log file, the permissions on the file don't allow me to start the shell, and at times cause issues with apache.
I have run chmod -Rv 777 on the directory, which sets all the permissions so we can do what we like, but the next logfile created goes back to some default.
How can I set permissions on the logfiles to be created?
Marc
Permissions on files created by a particular user depend on the umask set for that user.
Now you need to find out who is running the apache service:
ps aux | grep apache | awk '{ print $1 }'
Then for this particular user running apache (www-data?)
sudo chown -R your_user:user_running_apache directory
where directory is the root directory of your django application.
To make sure that all the files that will be added to this directory in the future have
the correct permissions run:
sudo chmod -R g+s directory
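What g+s (the setgid bit) on a directory buys you, in a self-contained sketch (the scratch path is arbitrary): files created inside a setgid directory inherit the directory's group rather than the creator's primary group.

```shell
d=/tmp/setgid_demo
rm -rf "$d" && mkdir "$d"
chmod 2775 "$d"                     # 2 = setgid bit; same as chmod g+s on 775
touch "$d/app.log"
stat -c '%a %n' "$d"                # directory mode shows as 2775
stat -c '%G %n' "$d" "$d/app.log"   # the two group names match
```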
I faced the same problem - I had issues with starting the shell and with celery due to rotated-log-file permissions. I'm running my django-project through uwsgi (which runs as the www-data user), so I handled it by setting a umask for it (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#umask).
Also I'm using buildout, so my fix looks like this:
[uwsgi]
recipe = buildout.recipe.uwsgi
xml-socket = /tmp/uwsgi.sock
xml-master = True
xml-chmod-socket = 664
xml-umask = 0002
xml-workers = 3
xml-env = ...
xml-wsgi-file = ...
After this, log file permissions became 664, so members of the www-data group can also write to them.
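The arithmetic behind the 664: new files start from 666, and the umask bits are cleared from that, so a umask of 0002 removes only the others' write bit. A quick check in a scratch directory:

```shell
dir=/tmp/umask_demo
rm -rf "$dir" && mkdir "$dir"
( umask 0002; touch "$dir/uwsgi.log" )   # 666 & ~0002 = 664
stat -c '%a' "$dir/uwsgi.log"            # prints 664
```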