I'm having the strangest bug I've ever seen on a Linux system right now, and there seem to be only three possible explanations for it:
Either appending sudo makes file writes instant,
or appending sudo introduces a short delay before executing statements,
or I've got no clue what's happening with my program.
Well, let me give you some background. I'm currently writing a C++ program for Raspberry Pi GPIO manipulation. There are no visible errors in the program as far as I can tell, since it works successfully both with sudo and with a delay. So here's how the RPi's GPIO works:
First you have to export a pin to reserve it for manipulation; this creates a new directory named gpio<number> with several files in it.
echo 17 > /sys/class/gpio/export
Then set its direction ("in" means read, "out" means write):
echo "out" > /sys/class/gpio/gpio17/direction
Then write the value (0 or 1, for off and on):
echo 1 > /sys/class/gpio/gpio17/value
At the end, unexport the pin; the directory gets deleted.
echo 17 > /sys/class/gpio/unexport
It doesn't matter whether you do this through bash commands, C/C++, or any other language's I/O, since in Unix these are just files and you only need to read from or write to them. I've tested this manually and it works, so my manual test passes.
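For illustration, here's a minimal sketch of those four steps in C++ (this is not my Led class, just the bare file I/O, hard-coding pin 17 from the examples above):

#include <fstream>

int main() {
    std::ofstream("/sys/class/gpio/export") << 17 << "\n";               // reserve the pin
    std::ofstream("/sys/class/gpio/gpio17/direction") << "out" << "\n";  // set as output
    std::ofstream("/sys/class/gpio/gpio17/value") << 1 << "\n";          // switch it on
    std::ofstream("/sys/class/gpio/unexport") << 17 << "\n";             // release the pin
    return 0;
}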
Now I have a simple test written for my program, which looks like this:
TEST(LEDWrites, LedDevice)
{
    Led led1(17, "MyLED");
    // auto b = sleep(1);
    EXPECT_EQ(true, led1.on());
}
The Led class constructor does the export part (echo 17 > /sys/class/gpio/export), while the .on() call sets the direction (echo "out" > /sys/class/gpio/gpio17/direction) and writes the value as well (echo 1 > /sys/class/gpio/gpio17/value). Forget about unexport here; it is handled by the destructor and plays no role in this test.
If you're curious, these functions handle the I/O like this:
{
    const std::string direction = _dir ? "out" : "in";
    const std::string path = GPIO_PATH + "/gpio" + std::to_string(powerPin) + "/direction";
    std::ofstream dirStream(path.c_str(), std::ofstream::trunc);
    if (dirStream) {
        dirStream << direction;
    } else {
        // LOG error here
        return false;
    }
    return true;
}
In other words, basic C++ file I/O. Now let me explain the bug.
First, here are three runs of the same test.
Normal run FAILS
[isaac@alarmpi build]$ ./test/testexe
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (1 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
../test/test.cpp:20: Failure
Value of: led1.on()
Actual: false
Expected: true
[ FAILED ] LEDWrites.LedDevice (2 ms)
[----------] 1 test from LEDWrites (3 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (6 ms total)
[ PASSED ] 1 test.
[ FAILED ] 1 test, listed below:
[ FAILED ] LEDWrites.LedDevice
1 FAILED TEST
Run with sudo PASSES
[isaac@alarmpi build]$ sudo ./test/testexe
[sudo] password for isaac:
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (2 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
[ OK ] LEDWrites.LedDevice (2 ms)
[----------] 1 test from LEDWrites (2 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (5 ms total)
[ PASSED ] 2 tests.
Delay run PASSES (with // auto b = sleep(1); uncommented)
[isaac@alarmpi build]$ ./test/testexe
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (2 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
[ OK ] LEDWrites.LedDevice (1001 ms)
[----------] 1 test from LEDWrites (1003 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (1005 ms total)
[ PASSED ] 2 tests.
The only difference between the delay run and the normal run is that single uncommented line: auto b = sleep(1);. Everything else is the same: the device, the directory structure, the build configuration, everything. The only thing that explains this is that Linux might create that file and its friends slightly later, or it takes some time, and I call .on() before that. Well, that could explain it...
But then why does the sudo invocation with no delay pass? Does it make those writes faster/instant, or does it insert the delay by itself? Is this caused by some kind of buffering? Please say no :/
If it matters, I'm using the following udev rules to get non-sudo access to the GPIO directory:
SUBSYSTEM=="bcm2835-gpiomem", KERNEL=="gpiomem", GROUP="gpio", MODE="0660"
SUBSYSTEM=="gpio", KERNEL=="gpiochip*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys/class/gpio/export /sys/class/gpio/unexport ; chmod 220 /sys/class/gpio/export /sys/class/gpio/unexport'"
SUBSYSTEM=="gpio", KERNEL=="gpio*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value ; chmod 660 /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value'"
EDIT: As @charles suggested, I added std::flush after every write in my I/O operations. Still failing.
Strace to the rescue
Let's look at the trace of the failing run:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
..., 0666) = -1 EACCES (Permission denied)
Okaaay, here's something. That explains why it passes with sudo. But why does it pass with the delay? Let's check that too:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
No wait, what? This means the permission denial must happen when the files aren't ready at that point. But how does using sudo solve that?
Here's the relevant output for sudo:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
There is a race between udev and your program. When you write to /sys/class/gpio/export, the write will not return until the GPIO is fully created. However, once it has been created, you have two processes that simultaneously take action on the new device:
A hotplug/uevent triggers udev to evaluate its rules. As part of these rules, it will change the ownership and permissions of /sys/class/gpio/gpio17/value.
Your program continues. It will immediately try to open /sys/class/gpio/gpio17/value.
So there is some chance that your program will open the value file before udev has changed its ownership and permissions. This is in fact very likely, because your udev handler does an execve of a shell which then execve's chown and chmod. But even without that, the scheduler will normally give priority to the task that was already running when returning from a syscall, so your program will usually open the value file before udev has even woken up.
By inserting a sleep, you allow udev to do its thing. To make your program robust, you could poll the file with access() before opening it, as sketched below.
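A minimal sketch of that polling idea (the timeout and retry interval are arbitrary choices of mine, nothing mandated):

#include <unistd.h>  // access(), usleep()

// Spin until 'path' becomes writable for us, i.e. until udev has fixed
// the ownership and permissions. Gives up after roughly one second.
static bool waitWritable(const char* path) {
    for (int i = 0; i < 100; ++i) {
        if (access(path, W_OK) == 0)
            return true;
        usleep(10000);  // 10 ms between attempts
    }
    return false;
}

Call it on /sys/class/gpio/gpio17/direction (and value) right after the export, before opening the streams.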
It would also help to give udev a higher priority, e.g. chrt -f -p 3 $(pidof systemd-udevd). This gives udev real-time priority, which means it will always run before your program. It can also make your system unresponsive, so take care.
From your strace output:
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
You are first writing value, then direction.
Of course, you should first set the proper direction before writing the value.
Also, you should probably end your output
if (dirStream) {
    dirStream << direction;
} else {
    // LOG error here
    return false;
}
with a newline.
The echo command also appends a newline.
if (dirStream) {
    dirStream << direction << std::endl;
} else {
    // LOG error here
    return false;
}
(In this case, I would explicitly use std::endl to flush. Of course just adding '\n' works as well, but making the flush explicit makes the code more robust. As it is, you are now relying on the fact that the stream gets closed immediately after writing—which it might not if you later decide to keep the stream open until the end of your program.)
The missing trailing newline might explain why it works with a delay: after the delay, the driver might treat the received data as complete, as if a newline had arrived and no more characters were waiting in the stream.
Related
I am using ctest with gtest_discover_tests to unit test my applications. It appears that ctest calls my test executable once per individual test (see below for an example). I believe this results in the default.profraw code coverage file being overwritten for each test, so I only get code coverage for the last test that was executed.
As an example:
ctest --verbose -R "Test1|Test2"
57: Test command: /home/user/project_dir/build_unit_test/project/project_test "--gtest_filter=TestUnit.Test1" "--gtest_also_run_disabled_tests"
57: Test timeout computed to be: 10000000
57: Running main() from gtest_main.cc
57: Note: Google Test filter = TestUnit.Test1
57: [==========] Running 1 test from 1 test case.
57: [----------] Global test environment set-up.
57: [----------] 1 test from TestUnit
57: [ RUN ] TestUnit.Test1
57: [ OK ] TestUnit.Test1 (6 ms)
57: [----------] 1 test from TestUnit (6 ms total)
57:
57: [----------] Global test environment tear-down
57: [==========] 1 test from 1 test case ran. (6 ms total)
57: [ PASSED ] 1 test.
9/10 Test #57: project.TestUnit.Test1 .............. Passed 0.05 sec
test 58
Start 58: project.TestUnit.Test2
58: Test command: /home/user/project_dir/build_unit_test/project/project_test "--gtest_filter=TestUnit.Test2" "--gtest_also_run_disabled_tests"
58: Test timeout computed to be: 10000000
58: Running main() from gtest_main.cc
58: Note: Google Test filter = TestUnit.Test2
58: [==========] Running 1 test from 1 test case.
58: [----------] Global test environment set-up.
58: [----------] 1 test from TestUnit
58: [ RUN ] TestUnit.Test2
58: [ OK ] TestUnit.Test2 (1 ms)
58: [----------] 1 test from TestUnit (1 ms total)
58:
58: [----------] Global test environment tear-down
58: [==========] 1 test from 1 test case ran. (2 ms total)
58: [ PASSED ] 1 test.
10/10 Test #58: project.TestUnit.Test2 .............. Passed 0.04 sec
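One mitigation I'm considering, assuming the profiles come from clang's source-based coverage runtime: point each test process at its own profile file via LLVM_PROFILE_FILE (%p expands to the process ID), then merge the results afterwards.

LLVM_PROFILE_FILE="coverage-%p.profraw" ctest --verbose
llvm-profdata merge -sparse coverage-*.profraw -o merged.profdata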
In package.json I have 2 script commands:
"test:unit": "jest --watch --testNamePattern='^(?!\\[functional\\]).+$'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",
Copying ^(?!\\[functional\\]).+$ into https://regex101.com/, it does not match the string passed as the first argument of describe() below:
describe("[functional] live tests", () => {
When changed to ([functional]).+$, the pattern does match. (I have to remove a pair of backslashes on each end, which I believe are only there as escapes inside the .json file.)
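For what it's worth, checking both patterns in plain Node agrees with regex101 (the test name here is taken from the describe() above):

const name = "[functional] live tests";
console.log(/^(?!\[functional\]).+$/.test(name)); // false: the lookahead rejects it
console.log(/\[functional\]/.test(name));         // true: the positive pattern matches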
Here is what I see when running npm run test:unit in my project root:
// the functional test runs (not desired)
$ npm run test:unit
functions/src/classes/__tests__/Functional.test.ts:30:47 - error TS2339: Property 'submit' does not exist on type 'Element'.
30 await emailForm.evaluate(form => form.submit());
~~~~~~
RUNS ...s/__tests__/Functional.test.ts
Test Suites: 1 failed, 1 skipped, 3 passed, 4 of 5 total
Tests: 2 skipped, 16 passed, 18 total
Snapshots: 0 total
Time: 8.965s, estimated 27s
Ran all test suites with tests matching "^(?!\[functional\]).+$".
Active Filters: test name /^(?!\[functional\]).+$/
The functional tests are not built out, which explains the syntax error; that's not important here. The key issue is why the tests were not skipped.
I believe the problem has to do with the negative regex matcher. The positive matcher without the ! only matches tests that have [functional] in their name or are nested in a describe block that does:
$ npm run test:functional
Test Suites: 1 failed, 4 skipped, 1 of 5 total
Active Filters: test name /\[functional\]/
Does anyone know why the negative regex pattern fails during npm run test:unit?
Instead of a regex fix, I changed the flag on the unit-testing script to an ignore pattern, copying the matching pattern for [functional]:
"test:unit": "jest --watch --testIgnorePattern='\\[functional\\]'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",
Now I'm setting up a C++ test environment with CMake. I've achieved what I want to do, but I'm confused by two different test output styles.
In my example below, what does 'make test' actually do? I expected 'make test' and './test/Test' to produce the same output, but they don't quite. The 'make test' output differs from the googletest output style. Although the test results look the same, I'm not satisfied with this output.
Output Differences
$ make test
Running tests...
Test project /path/to/sample/build
Start 1: MyTest
1/1 Test #1: MyTest ...........................***Failed 0.02 sec
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 0.02 sec
The following tests FAILED:
1 - MyTest (Failed)
Errors while running CTest
make: *** [test] Error 8
$ ./test/Test
Running main() from gtest_main.cc
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from MyLibTest
[ RUN ] MyLibTest.valCheck
/path/to/test/test.cc:10: Failure
Expected: sqr(1.0)
Which is: 1
To be equal to: 2.0
Which is: 2
[ FAILED ] MyLibTest.valCheck (0 ms)
[ RUN ] MyLibTest.negativeValCheck
[ OK ] MyLibTest.negativeValCheck (0 ms)
[----------] 2 tests from MyLibTest (0 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (0 ms total)
[ PASSED ] 1 test.
[ FAILED ] 1 test, listed below:
[ FAILED ] MyLibTest.valCheck
1 FAILED TEST
Commands
mkdir build
cd build
cmake ..
make test      # NOT googletest output style
./test/Test    # looks like googletest output
My Environment
root
- CMakeLists.txt
+ src/
- CMakeLists.txt
- main.cc
- sqr.cc
- sqr.h
+ test/
- CMakeLists.txt
- test.cc
root/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project (MYTEST)
add_subdirectory(src)
add_subdirectory(test)
enable_testing()
add_test(NAME MyTest COMMAND Test)
test/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
set(GTEST_ROOT /path/to/googletest/googletest)
include_directories(${GTEST_ROOT}/include/)
link_directories(${GTEST_ROOT}/build/)
add_executable(Test ${CMAKE_CURRENT_SOURCE_DIR}/test.cc)
target_link_libraries(Test sqr gtest gtest_main pthread)
test/test.cc
#include "../src/sqr.h"
#include <gtest/gtest.h>
namespace {

class MyLibTest : public ::testing::Test {};

TEST_F(MyLibTest, valCheck) {
    EXPECT_EQ(sqr(3.0), 9.0);
    EXPECT_EQ(sqr(1.0), 2.0);  // it fails!
}

TEST_F(MyLibTest, negativeValCheck) {
    EXPECT_EQ(sqr(-3.0), 9.0);
}

}  // namespace
You can modify the behaviour of ctest (which is what make test will ultimately execute) with environment variables.
For example:
CTEST_OUTPUT_ON_FAILURE=1 make test
This will print full output for test executables that had a failure.
Another one you may be interested in is CTEST_PARALLEL_LEVEL, which controls how many tests ctest runs in parallel.
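If you want that behaviour without setting the variable every time, a common convenience (a sketch, not part of the question's files) is a custom target that bakes the flag in:

add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} --output-on-failure)
add_dependencies(check Test)

Then 'make check' builds the test executable and runs ctest with full gtest output for failing tests.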
I'm new to deploying, so this is probably a rookie mistake, but here it goes.
I have a Rails 4 app that I'm deploying to a Linux server using a combination of Capistrano, Unicorn, and Nginx. The deploy script runs fine and the app is now reachable at the desired IP, so that's great. The thing is, a) Unicorn doesn't restart upon deployment (at least, the PIDs don't change) and b) not surprisingly, the new changes aren't reflected in the available app. I don't seem to be able to do anything other than completely stopping and restarting unicorn in order to refresh it. If I do this, then the changes are picked up, but this process is obviously not ideal.
Manually, if I run kill -s HUP $UNICORN_PID then the pids of the workers change but not the master, and changes aren't picked up (which, apparently they are supposed to be); using USR2 appears to have no effect on the current processes.
Here's the unicorn init script I'm using, based on suggestions from other Stack Overflow questions about similar problems:
#!/bin/sh
set -e

USAGE="Usage: $0 <start|stop|restart|upgrade|rotate|force-stop>"

# app settings
USER="deploy"
APP_NAME="app_name"
APP_ROOT="/path/to/$APP_NAME"
ENV="production"

# environment settings
PATH="/home/$USER/.rbenv/shims:/home/$USER/.rbenv/bin:$PATH"
CMD="cd $APP_ROOT/current && bundle exec unicorn -c config/unicorn.rb -E $ENV -D"
PID="$APP_ROOT/shared/pids/unicorn.pid"
OLD_PID="$PID.oldbin"
TIMEOUT=${TIMEOUT-60}

# make sure the app exists
cd $APP_ROOT || exit 1

sig () {
    test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
    test -s $OLD_PID && kill -$1 `cat $OLD_PID`
}

case $1 in
start)
    sig 0 && echo >&2 "Already running" && exit 0
    echo "Starting $APP_NAME"
    su - $USER -c "$CMD"
    ;;
stop)
    echo "Stopping $APP_NAME"
    sig QUIT && exit 0
    echo >&2 "Not running"
    ;;
force-stop)
    echo "Force stopping $APP_NAME"
    sig TERM && exit 0
    echo >&2 "Not running"
    ;;
restart|reload)
    sig HUP && echo "reloaded $APP_NAME" && exit 0
    echo >&2 "Couldn't reload, starting '$CMD' instead"
    su - $USER -c "$CMD"
    ;;
upgrade)
    if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
    then
        n=$TIMEOUT
        while test -s $OLD_PID && test $n -ge 0
        do
            printf '.' && sleep 1 && n=$(( $n - 1 ))
        done
        echo

        if test $n -lt 0 && test -s $OLD_PID
        then
            echo >&2 "$OLD_PID still exists after $TIMEOUT seconds"
            exit 1
        fi
        exit 0
    fi
    echo >&2 "Couldn't upgrade, starting '$CMD' instead"
    su - $USER -c "$CMD"
    ;;
rotate)
    sig USR1 && echo "rotated logs OK" && exit 0
    echo >&2 "Couldn't rotate logs" && exit 1
    ;;
*)
    echo >&2 "$USAGE"
    exit 1
    ;;
esac
Using this script, start and stop work as expected, but reload/restart do nothing (they print the expected output but don't change the running pids) and upgrade fails. According to the error log, it's because the first master is still running (ArgumentError: Already running on PID: $PID).
And here's my unicorn.rb:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
pid "#{app_path}/../../shared/pids/unicorn.pid"
# listen
listen "#{app_path}/../../shared/sockets/unicorn.sock", :backlog => 64
# logging
stderr_path "#{app_path}/../../shared/log/unicorn.stderr.log"
stdout_path "#{app_path}/../../shared/log/unicorn.stdout.log"
# workers
worker_processes 3
# use correct Gemfile on restarts
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{working_directory}/Gemfile"
end
# preload
preload_app false
before_fork do |server, worker|
  old_pid = "#{app_path}/shared/pids/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
Any help is very much appreciated, thanks!
It is hard to say for certain, since I haven't encountered this particular issue before, but my hunch is that this is your problem:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
Every time you deploy, Capistrano creates a new directory for your app at the location releases/<timestamp>. It then updates a current symlink to point at this latest release directory.
In your case, you may mistakenly be telling Unicorn to use releases/<timestamp> as its working_directory. (SSH to the server and check the contents of unicorn.rb to be certain.) Instead, what you should do is point to current. That way you don't have to stop and cold start unicorn to get it to see the new working directory.
# Since "current" is a symlink to the current release,
# Unicorn will always see the latest code.
working_directory "/var/www/my-app/current"
I suggest rewriting your unicorn.rb so that you aren't using relative paths. Instead, hard-code the absolute paths to current and shared; see the sketch below. That is OK because those paths remain the same for every release.
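Here's a sketch of that idea, assuming the app lives at /var/www/my-app (adjust the root to your actual layout):

# config/unicorn.rb
app_root = "/var/www/my-app"

working_directory "#{app_root}/current"
pid               "#{app_root}/shared/pids/unicorn.pid"
listen            "#{app_root}/shared/sockets/unicorn.sock", :backlog => 64
stderr_path       "#{app_root}/shared/log/unicorn.stderr.log"
stdout_path       "#{app_root}/shared/log/unicorn.stdout.log"
worker_processes  3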
The line
ENV="production"
looks extremely suspicious to me. I suspect that it wants to be
RAILS_ENV="production"
Without this, won't Rails wake up not knowing which environment it's in?
I would like to run nload (a network throughput monitor) as a daemon on startup (or just automate it in general). I can successfully run it as a daemon from the command line by typing this:
nload eth0 >& /dev/null &
Just some background: I modified the nload source code (written in C++) slightly to write to a file in addition to outputting to the screen. I would like to read the throughput values from that file. The reason I am redirecting to /dev/null is so that I don't need to worry about the stdout output.
The weird thing is that when I run it manually, it runs just fine as a daemon and I am able to read throughput values from the file. But every attempt at automation has failed. I have tried init.d, rc.local, and cron, with no luck. The script I wrote to automate it is:
#!/bin/bash
echo "starting nload"
/usr/bin/nload eth0 >& /dev/null &
if [ $? -eq 0 ]; then
    echo started nload
else
    echo failed to start nload
fi
I can confirm that the script does run when automated, since I tried logging the output. It even logs "started nload", but when I look at the list of running processes, nload is not among them. I can also confirm that when the script is run manually from the shell, nload starts up just fine as a daemon.
Does anyone know what could be preventing this program from running when run via an automated script?
Looks like nload crashes when it's not run from a terminal. Let's trace it from rc.local:
viroos#null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
Looks like the HOME env var is missing:
viroos#null-linux:~$ cat /tmp/nload.trace
brk(0x1f83000) = 0x1f83000
write(2, "Could not retrieve home director"..., 34) = 34
write(2, "\n", 1) = 1
exit_group(1) = ?
+++ exited with 1 +++
Let's fix this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
Now we have another problem:
viroos#null-linux:~$ cat /tmp/nload.trace
read(3, "\32\1\36\0\7\0\1\0\202\0\10\0unknown|unknown term"..., 4096) = 320
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f23e62c9000, 4096) = 0
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd149010) = -1 ENOTTY (Inappropriate ioctl for device)
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd148fb0) = -1 ENOTTY (Inappropriate ioctl for device)
write(2, "Error opening terminal: unknown."..., 33) = 33
exit_group(1) = ?
+++ exited with 1 +++
I saw you mentioned that you modified the nload code, but my guess is you haven't removed the handling for a missing terminal. You can try further editing the nload code, or use screen in detached mode:
viroos#null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
screen -S nload -dm /usr/bin/nload
exit 0