I am trying to grant reboot capability to my AppImage using setcap. Using the following command on a simple application (all it does is reboot the machine) works; however, it does not work with my actual app's AppImage.
Both applications essentially do the same thing for rebooting:
sync();
if(reboot(RB_AUTOBOOT) == -1) {
// handle errno
}
(In case you want to try the code, include <unistd.h> and <sys/reboot.h>)
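For reference, a self-contained version of that test (roughly what a minimal ./main could be; this is only a sketch, not the exact code of either app) might look like this:
#include <cstdio>
#include <unistd.h>
#include <sys/reboot.h>

int main() {
    sync();                           // flush pending filesystem writes first
    if (reboot(RB_AUTOBOOT) == -1) {  // needs CAP_SYS_BOOT; fails with EPERM otherwise
        perror("reboot");
        return 1;
    }
    return 0;
}
It needs CAP_SYS_BOOT (or root) at run time, otherwise reboot() fails with EPERM.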
Output with minimal test app:
sudo setcap -v cap_sys_boot+ep ./main
./main: OK
Output with appimage:
sudo setcap -v cap_sys_boot+ep ./app.AppImage
./app.AppImage differs in [pe]
Any idea what I can do?
I'm working on a Yocto-based system. My problem is that I can't start my program written in C++ and the web server (node.js) at the same time right after my device boots.
I already tried this in /etc/init.d:
#! /bin/bash
/home/ProjectFolder/myProject
cd /home/myapp && DEBUG=myapp:* npm start
exit 0
I changed the rights after creating the script by
chmod +x ./startProg.sh
After that I linked it by
update-rc.d startProg.sh defaults
After a reboot the system only starts the C++ program. I tried some other possibilities, like separating the two commands into different shell scripts, but that didn't work out any better.
Is there an option I missed, or did I make a mistake trying to put those two processes into the autostart?
This of course isn't a C++ or Node.js question. A shell script is a list of commands that are executed in order, unless specified otherwise. So your shell script runs your two programs in the order specified: first myProject, and only when that's done is npm started.
This is the same as what would happen from the prompt, and the solution is the same: run the first program in the background by appending & to it, i.e. /home/ProjectFolder/myProject &
Background
I created a simple Hello World C++ program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello World!" << endl;
    return 0;
}
And compiled it with clang++ like so (g++ points to clang++ on OS X apparently):
g++ helloworld-cpp.cpp
This produces an executable, a.out. Running it at the prompt causes bash to throw the error Operation not permitted, as shown:
$ ./a.out
-bash: ./a.out: Operation not permitted
Things I've Tried
Verifying the file has execute permissions, and no attributes or flags that would prevent it from running, using ls -leO:
-rwxr-xr-x 1 monarch staff - 15212 Jan 1 13:51 a.out
Disabling "System Integrity Protection" using csrutil disable from the Recovery OS terminal, rebooting, recompiling, and running a.out. The same error messages results.
Question
Are there any other restrictions that could prevent binaries I compile on Mac OS X from running?
Figured it out.
My code was on an encrypted sparseimage, which had the quarantine attribute set on it. I checked this by running mount like so (see the attributes on /Volumes/work):
$ mount
/dev/disk0s2 on / (hfs, local, journaled)
/dev/disk2s2 on /Volumes/work (hfs, local, nodev, nosuid, journaled, noowners, quarantine, mounted by monarch)
The actual sparseimage is located in my home folder, titled work.sparseimage. I removed the quarantine attribute like so:
$ xattr -d com.apple.quarantine work_personal.sparseimage
I then unmounted (ejected) the image, then re-mounted it, recompiled the file and it executed without the error.
Special thanks to @Mark Setchell for asking me in the question's comments if noexec was set on the drive, and to everyone else for their suggestions.
I'm trying to run my application as root because I need to access low level hardware on my computer.
When I run the command:
./application_name
...it works, except that it gives an error saying it needs root. However, when I run this:
sudo ./application_name
...I get no terminal output.
I've confirmed that every time I run an executable as root on Linux, it prints nothing to the terminal. How can I fix this?
Edit: somewhat of a test case provided (mobile so can't type out much):
sudo g++ test.cpp -o executable
sudo chmod +x executable
This works on Debian:
./executable
This doesn't:
sudo ./executable
test.cpp:
#include <iostream>
int main() {
    std::cout << "Hello World!";
}
That behavior is really strange. Root permissions for that application should have no effect on standard output.
For example, I made a simple test, a "hello world" that I ran as root on Debian OS and I had output in terminal.
A simple test to convince yourself that the output is there is to redirect it to a file, for example sudo ./executable > output.txt; you'll see that everything is OK.
Note that it would be strange not to get output from a simple "hello world".
I want to restart the computer by running a command on Linux using QProcess. I have hard-coded my root password in my application.
When I run the following in a terminal, it works perfectly:
echo myPass | sudo -S shutdown -r now
When I put the command in a shell script and call it via QProcess, it is also successful:
QProcess process;
process.startDetached("/bin/sh", QStringList()<< "myScript.sh");
But I cannot run it by passing it directly to QProcess:
process.startDetached("echo myPass | sudo -S shutdown -r now ");
It will just print myPass | sudo -S shutdown -r now
How is it possible to run such relatively complex commands directly with QProcess (without putting them in a shell script)?
The key methods that QProcess provides for this purpose are:
void QProcess::setProcessChannelMode(ProcessChannelMode mode)
and
void QProcess::setStandardOutputProcess(QProcess * destination)
Therefore, the following code snippet would be the equivalent of command1 | command2 without tying yourself to one interpreter or another:
QProcess process1;
QProcess process2;
process1.setStandardOutputProcess(&process2);  // pipe process1's stdout into process2's stdin
process1.start("echo myPass");
process2.start("sudo -S shutdown -r now");
process2.setProcessChannelMode(QProcess::ForwardedChannels);
// Wait for the first process to start
if (!process1.waitForStarted())
    return 0;
bool retval = false;
QByteArray buffer;
// To be fair: you only need to wait here for a bit with shutdown,
// but I will still leave the rest here for a generic solution
while ((retval = process2.waitForFinished()))
    buffer.append(process2.readAll());
if (!retval) {
    qDebug() << "Process 2 error:" << process2.errorString();
    return 1;
}
You could drop the sudo -S part because you could run this small program as root, with the rights set up accordingly. You could even set setuid or setcap on the shutdown program.
What we usually do when building commercial Linux systems is to have a minimal application that can get setuid or setcap for the activity it is trying to do, and then call that explicitly with system(3) or QProcess on Linux. Basically,
I would write that small application to avoid giving full root access to the whole application, i.e. to restrict the access rights against malicious use, as follows:
sudo chmod u+s /path/to/my/application
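For illustration, such a minimal helper can be little more than the privileged call itself. The following is only a sketch (the file name reboot-helper.cpp and the direct use of reboot(2) instead of spawning shutdown are my assumptions); it is meant to be installed root-owned with the setuid bit, as per the chmod line above:
// reboot-helper.cpp -- hypothetical minimal setuid-root helper (sketch)
#include <cstdio>
#include <unistd.h>
#include <sys/reboot.h>

int main() {
    // With the setuid bit set on a root-owned binary, the effective UID is 0;
    // adopt it as the real UID so the whole process runs as root.
    if (setuid(geteuid()) != 0) {
        perror("setuid");
        return 1;
    }
    sync();                           // flush pending filesystem writes
    if (reboot(RB_AUTOBOOT) == -1) {  // needs CAP_SYS_BOOT, which root has
        perror("reboot");
        return 1;
    }
    return 0;
}
The Qt application would then only launch this helper (for example with QProcess::startDetached()) instead of piping the password to sudo.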
First, you could configure sudo to avoid asking you for the password, for instance by being a member of the sudo group and having the line
%sudo ALL=NOPASSWD: ALL
in your /etc/sudoers file. Of course, not asking for the password lowers the security of your system.
To answer your question about Qt, remember that bash(1), like all POSIX shells (hence /bin/sh), accepts a -c argument followed by a command string (system(3) actually forks a /bin/sh -c). So just execute
process.startDetached("/bin/sh", QStringList()<< "-c"
<< "echo myPass | sudo -S shutdown -r now");
As AntiClimacus answered, putting your root password inside an executable is a bad idea.
You must put your command in a shell script and execute sh or bash via QProcess with your shell script as the argument, because your command contains |, which must be interpreted by sh or bash.
However, it's just my opinion: I don't think what you are doing, i.e. including your root password in an executable, is a good solution.
I have a git repository that needs to run a post-receive hook as sudo. The binary that I compiled to test this looks like:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
int main() {
    int ret;
    // setuid() returns 0 on success and -1 on failure
    ret = setuid(geteuid());
    if (ret != 0) {
        fprintf(stderr, "error setting uid %d\n", ret);
    }
    system("[...command only sudo can access...]");
    return 0;
}
The geteuid() call retrieves the owner id of post-receive (its effective UID), which the program then tries to adopt with setuid(). When running this as any user (including the super user) it runs the script correctly as root. However, when triggered by the git hook, the setuid call fails. I have tried running chmod u+s post-receive, and I also tried some other configurations, but I am running out of ideas. Any reason why it would work in all cases except when git triggers it?
btw, platform: Ubuntu Server 9.04 (2.6.28-15), git 1.6.0.4, gcc version 4.3.3 (Ubuntu 4.3.3-5ubuntu4)
The file system where the git repo is stored may be mounted with the nosuid option.
If you are pushing over ssh, the suid capability may be disabled for commands invoked with ssh (no CAP_SETUID).
In any case, what you are trying to do is very inadvisable.
Run your program as a daemon.
Wait for input on a socket/named pipe/msgq.
In the hook, send a message to your daemon with whatever info it needs to perform the operation.
If needed, send a message back to the hook with status.
This will likely be easier to manage and secure properly.
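As a rough sketch of that idea (the FIFO path /var/run/deploy.fifo, the "deploy" message, and the daemon name are made-up placeholders, not part of the original answer), the privileged side could look like this:
// deployd.cpp -- hypothetical daemon started as root at boot (sketch only)
#include <cstdlib>
#include <fstream>
#include <string>
#include <sys/types.h>
#include <sys/stat.h>

int main() {
    const char *fifo = "/var/run/deploy.fifo";  // placeholder named pipe
    mkfifo(fifo, 0622);                         // owner-readable, world-writable; ignore EEXIST for brevity
    while (true) {
        std::ifstream in(fifo);                 // blocks until the hook opens the pipe for writing
        std::string line;
        while (std::getline(in, line)) {
            if (line == "deploy") {
                // The daemon itself runs as root, so no setuid tricks are needed here.
                std::system("[...command only sudo can access...]");
            }
        }
    }
}
The post-receive hook then stays unprivileged and only writes a line such as deploy into the pipe; a status reply could travel back over a second pipe in the same way.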
Try running your program from the command prompt.
Try writing a bootstrap script, i.e.
#!/bin/sh
./your_program
Then make the script the hook.