I'm working with several external processes in my application. During tests, the stderr output of several of these processes appears inline with my test messages and results. I can do this:
mix test --trace 2> error.log
However, when I do, I lose all my lovely colors. Also some Elixir errors still appear, though not all (which is fine for me).
Is there a better way to suppress the errors of the external programs without affecting the mix output? Is it even a good idea?
Or should my tests not actually interact with the real command-line utilities? I ask because at that point I'm honestly no longer clear on what I'm testing.
UPDATE:
Below is a simplified function and test to illustrate the concept better.
The function:
#doc "Function takes a pre-routing rule as a string and adds it with iptables"
def addrule(pre_routing_rule)
%Porcelain.Result{out: _output, status: status} = Porcelain.shell("sudo iptables -t nat -A #{pre_routing_rule}")
end
The test:
test "Removing a non-existent rule fails" do
Rules.clear
assert {:error, :eiptables} == Rules.remove("PREROUTING -p tcp --dport 9080 -j DNAT --to-destination 192.168.1.3:9080")
end
This test passes perfectly. However, inline with the test messages it also outputs iptables: No chain/target/match by that name. The exact position of the message is unpredictable, and en masse these messages make the test output very hard to read. When I instead redirect stderr, I lose my colour-coding for some reason, which also makes it hard to follow test results.
I think you should suppress these error messages at the point where you call the command-line utilities, for example using the same redirection method as in the question.
The lack of colors is connected to how Elixir detects ANSI:
[ANSI support] is by default false unless Elixir can detect during startup that both stdout and stderr are terminals.
I'd suggest, in general, that you decide how to deal with stderr in your system calls. What's happening is that the test framework captures stdout, but not stderr.
You can change how stderr is dealt with in the initial Porcelain call. See:
Porcelain API Docs
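For example, a sketch (rule here stands for the pre-routing rule string; the err option and its values are from Porcelain's docs, and err: :string needs the goon driver):

result = Porcelain.shell("sudo iptables -t nat -A #{rule}", err: :out)
# iptables' stderr is now merged into result.out instead of interleaving
# with the test output; with the goon driver, err: :string would capture
# it separately in result.err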
Related
I'd like to get the number of tests that were run with go test, as a kind of checksum to detect whether all the tests are running. Since Go relies on file names and method names to determine what is a test, it's easy to mistype something, which would mean a test is silently skipped.
I think that the gotestsum tool is close to what you are looking for.
It is a wrapper around go test that prints formatted test output and a summary of the test run.
Default go test:
go test ./...
?    github.com/marco-m/timeit/cmd/sleepit  [no test files]
ok   github.com/marco-m/timeit/cmd/timeit   0.601s
Default gotestsum:
gotestsum
∅ cmd/sleepit
✓ cmd/timeit (cached)
DONE 11 tests in 0.273s <== see here
Check out the documentation and the built-in help; it is well written.
In my experience, gotestsum (and the other tools by the same organization) is good. For me, it is also very important to be able to use the standard Go test package, without other "test frameworks". gotestsum allows me to do so.
On the other hand, to really satisfy your requirement (print the number of declared tests and verify that that number actually ran), you would need something like TAP, the Test Anything Protocol, which works for any programming language:
1..4 <== see here
ok 1 - Input file opened
not ok 2 - First line of the input valid
ok 3 - Read the rest of the file
not ok 4 - Summarized correctly # TODO Not written yet
TAP actually is very nice and simple. I remember there was a Go port, tap-go, but it is now marked as archived.
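For a feel of the format, here is a minimal Go sketch that emits TAP by hand (purely illustrative, not integrated with go test); the 1..N plan line up front is what lets a consumer detect silently skipped tests:

package main

import "fmt"

func main() {
    tests := []struct {
        name string
        pass bool
    }{
        {"input file opened", true},
        {"first line of the input valid", false},
    }
    // The plan line declares how many test points should follow.
    fmt.Printf("1..%d\n", len(tests))
    for i, t := range tests {
        status := "ok"
        if !t.pass {
            status = "not ok"
        }
        // Each test point is numbered from 1 and carries a description.
        fmt.Printf("%s %d - %s\n", status, i+1, t.name)
    }
}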
Is there a way in C++ to tell if std::cout and std::cerr are pointing to the same destination?
That is, I'd like to be able to distinguish when the program is launched like
program or program > log 2>&1 or program &> log
versus
program > log or program 2> errors or program > log 2> errors
(The use case is a situation where we'd like error information to be printed to both stdout and stderr when they are separate, but want to print a slightly differently formatted output (not just a concatenation) if they both go to the same destination. Yes, I'm aware this isn't ideal, isn't the officially recommended way to do things, and shouldn't be looked on as a standard way of doing things. But please just trust me that we've taken the time to think things through, and for our particular use case this is the best option.)
For our purposes, we can assume that nothing has been done with cout/cerr redirection within the program itself (just the typical shell-level command line redirection), so if there's C-level functionality which looks at stdout/stderr directly (rather than the std::cout and std::cerr streams proper), that would likely work too.
This is another case of the XY Problem.
What you are really trying to accomplish is:
For our end-user use cases, we want to be sure that messages indicating what error has occurred end up in both stderr and stdout (because depending on how our users have or have not set up output redirection they may or may not see it in one or the other.) We could just print things once to each (effectively what we're doing currently), but if stdout and stderr go to the same location, it would be clearer and more user friendly if we could reformat things to remove the redundant printing.
Given that objective, a cleaner mechanism would be to allow the user to specify where they would like the error messages to go to. E.g.
the-program --error-destination "stdout"
the-program --error-destination "stderr"
the-program --error-destination "stdout,stderr"
the-program --error-destination "/tmp/errro-messages.txt"
the-program --error-destination "stdout,/tmp/errro-messages.txt"
With the understanding that "stdout" and "stderr" are special destinations. Anything else is a file.
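A minimal sketch of what handling such a flag could look like (all names here are hypothetical; opening the file once per message keeps the sketch short, not fast):

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Writes one error message to every destination in the comma-separated list.
void emit_error(const std::string & destinations, const std::string & message) {
    std::istringstream tokens(destinations);
    std::string dest;
    while (std::getline(tokens, dest, ',')) {
        if (dest == "stdout")
            std::cout << message << '\n';
        else if (dest == "stderr")
            std::cerr << message << '\n';
        else // anything else is a file
            std::ofstream(dest, std::ios::app) << message << '\n';
    }
}

// e.g. emit_error("stdout,/tmp/error-messages.txt", "disk full");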
If you really want to know, on Linux /proc/self/fd will reveal all! Try ls -l /proc/self/fd today...
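Programmatically, a sketch of the same idea (POSIX only): two descriptors point at the same place when fstat() reports matching device and inode numbers.

#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

// True when both descriptors refer to the same file, pipe, or terminal.
bool same_destination(int fd_a, int fd_b) {
    struct stat a, b;
    if (fstat(fd_a, &a) != 0 || fstat(fd_b, &b) != 0)
        return false; // on error, report "not known to be the same"
    return a.st_dev == b.st_dev && a.st_ino == b.st_ino;
}

int main() {
    std::printf("stdout and stderr share a destination: %s\n",
                same_destination(STDOUT_FILENO, STDERR_FILENO) ? "yes" : "no");
}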
Also, if you really want to emit an error to the console, having determined that the redirects have stolen your output, then you can always send it to /dev/tty. Don't bother with /dev/stderr, as that is just a link to /proc/self/fd/2! The one issue with this is that if your program is detached from the console (with nohup or similar), then you obviously can't use /dev/tty.
I am trying to replicate the ventilator/workers/sink paradigm described in the ZMQ guide. I have the same Python ventilator, the same C++ worker, and the same Python sink as described in the ZMQ examples. I want to launch the ventilator, workers, and sink from one main Python script, so I created class wrappers around the ventilator and sink, and both of those classes subclass Python's multiprocessing.Process. Since the C++ worker is a binary, I launch it with Python's subprocess.Popen call.
The order of starting all of this up is as follows:
h = subprocess.Popen('test') # test is the name of the binary
time.sleep(1)
s = sinkObj.start()
time.sleep(1)
v = ventObj.start()
What I am finding is that no data is getting through the system when I start up the components like this. However, if I start the C++ binary in its own shell, and only start the sinkObj and ventObj from the main python script, it works fine.
I apologize in advance if this is more of a Python question than a ZMQ question, but I haven't run into issues like this w/ Python's subprocess. I have also tried using os.system() instead of the subprocess... but same issue. I put all the code on this website: https://github.com/kkarrancsu/zmqtest if anybody is curious to test it out. There is a readme on that git which tells you what the files are.
Any ideas on why this could be happening?
------------------------- UPDATE --------------------
I found that if I create a shell script which simply launches the C++ binary, and call that shell script with os.system('run_the_shell_script'), it works! So this means that there is something wrong with the way I am using subprocess.Popen(...), but I can't seem to pinpoint what the issue is. I tried with the shell=True flag, but it still hangs with that...
It's the name of the worker binary file that causes the problem.
There are two solutions:
Change the name of the binary file test to test_new, do the same in your All.py file, and then it will work as you desire.
Substitute subprocess.Popen('./test', shell=True) for subprocess.Popen('test', shell=True).
test is a standard Linux command (and a shell built-in). If you type the following in your shell
$ echo $PATH
you may see that . (the current directory) is at the last position, if it is present at all. The shell searches the directories that $PATH indicates in order, and only falls back to the current directory . after all of them.
So when you execute subprocess.Popen('test', shell=True), the shell finds the standard test command before ever trying the . directory, and your worker binary is never executed.
As I see it, the ventilator and sink bind() to ports 6557 and 6558, and the C++ app connect()s to these ports. In this case, if you start the C++ app first, it will try to connect() to the endpoints, but as nothing is bound there yet, it will drop silently.
In ZeroMQ the basic principle is "first bind, then connect". So you should not connect() before you bind() something on the socket's URL. Imagine that bind() is the 'server' and connect() is the 'client': you cannot connect a client to a non-existing server. Also, in ZeroMQ every socket can be the 'server', but you should have only one bind()-ing socket per URL, while you can have multiple connect()s.
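A minimal pyzmq sketch of that ordering (the port number follows the question; both sockets live in one process only to keep the example short):

import zmq

ctx = zmq.Context()

sink = ctx.socket(zmq.PULL)
sink.bind("tcp://*:6557")               # the stable endpoint binds first

worker = ctx.socket(zmq.PUSH)
worker.connect("tcp://localhost:6557")  # the transient peer connects to it

worker.send(b"hello")                   # blocks until the connection is up
print(sink.recv())                      # b'hello'

worker.close()
sink.close()
ctx.term()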
I have software made of two halves: one is Python running on a first PC, the other is C++ running on a second one.
They communicate through the serial port (tty).
I would like to test the python side on my pc, feeding it with the proper data and see if it behaves as expected.
I started using subprocess but then came the problem: which stdin and stdout should I supply?
cStringIO does not work, because there is no fileno()
PIPE doesn't work either, because select.select() says there is something to read even when nothing has actually been sent
Do you have any hints? Is there a fake tty module I can use?
Ideally you should mock that out and just test the behavior, without relying too much on terminal IO. You can use mock.patch for that. Say you want to test t_read:
from unittest import mock  # on older Pythons: import mock
import sys

@mock.patch.object(sys.stdin, 'fileno')
@mock.patch.object(sys.stdin, 'read')
def test_your_behavior(self, mock_read, mock_fileno):
    # this should make select.select return what you expect it to return
    mock_fileno.return_value = 3  # fileno() must return an int
    # rest of the test goes here...
If you can post at least part of the code you're trying to test, I can maybe give you a better example.
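If you do want a real fake tty rather than mocks, there is also the standard library's pty module (POSIX only). A sketch: the slave end has a real fileno() and plays well with select.select().

import os
import pty
import select

master_fd, slave_fd = pty.openpty()

# Feed data into the fake serial port from the "other side".
os.write(master_fd, b"fake sensor data\n")

# The slave end is a real tty, so select sees the pending data.
ready, _, _ = select.select([slave_fd], [], [], 1.0)
if ready:
    print(os.read(slave_fd, 1024))  # b'fake sensor data\n'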
I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr.
I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)".
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Is the only difference between clog and cerr the buffering?
How should I implement (or find) a more robust logging facility (links please)?
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Yes. You want the rdbuf function.
std::ofstream ofs("logfile");
std::streambuf * old = std::cout.rdbuf(ofs.rdbuf()); // swap in the file's buffer, keeping the old one
std::cout << "Goes to file." << std::endl;
std::cout.rdbuf(old); // restore before ofs goes out of scope
Is the only difference between clog and cerr the buffering?
As far as I know, yes.
If you're in a POSIX shell environment (I'm really thinking of bash), you can redirect any file descriptor to any other file descriptor, so to redirect, you can just:
$ myprogram 2>&5
to redirect stderr to the file represented by fd=5.
Edit: On second thought, I like @Konrad Rudolph's answer about redirection better. rdbuf() is a more coherent and portable way to do it.
As for logging, well... I start with the Boost library for all things C++ that aren't in the standard library. Behold: Boost Logging v2
Edit: Boost Logging is not part of the Boost Libraries; it has been reviewed, but not accepted.
Edit: 2 years later, back in May 2010, Boost did accept a logging library, now called Boost.Log.
Of course, there are alternatives:
Log4Cpp (a log4j-style API for C++)
Log4Cxx (Apache-sponsored log4j-style API)
Pantheios (defunct? last time I tried I couldn't get it to build on a recent compiler)
Google's GLog (hat-tip @SuperElectric)
There's also the Windows Event logger.
And a couple of articles that may be of use:
Logging in C++ (Dr. Dobbs)
Logging and Tracing Simplified (Sun)
Since there are several answers here about redirection, I will add this nice gem I stumbled across recently:
#include <fstream>
#include <iostream>

class redirecter
{
public:
    // Swap src's buffer for dst's, remembering the original so it
    // can be restored when the redirecter goes out of scope.
    redirecter(std::ostream & dst, std::ostream & src)
        : src(src), sbuf(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src.rdbuf(sbuf); }
private:
    std::ostream & src;
    std::streambuf * const sbuf;
};

void hello_world()
{
    std::cout << "Hello, world!\n";
}

int main()
{
    std::ofstream log("hello-world.log");
    redirecter redirect(log, std::cout); // std::cout now writes to the log file
    hello_world();
    return 0;
}
It's basically a redirection class that allows you to redirect any two streams and restore them when you're finished.
Redirections
Konrad Rudolph's answer is good with regard to how to redirect std::clog (std::wclog).
Other answers tell you about various possibilities, such as using a command-line redirect like 2>output.log. With Unix you can also create a file and add another output to your commands with something like 3>output.log. In your program you then have to use fd number 3 to print the logs; you can continue to print to stdout and stderr normally. The Visual Studio IDE has a similar feature with its CDebug command, which sends its output to the IDE output window.
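For instance, a tiny POSIX sketch that prints its logs to fd 3, to be run as myprogram 3>output.log:

#include <unistd.h>
#include <cstring>

// Writes one log line to file descriptor 3, which the shell opened for us.
// If fd 3 was not set up, the writes simply fail with EBADF.
void log_line(const char * msg) {
    write(3, msg, std::strlen(msg));
    write(3, "\n", 1);
}

int main() {
    log_line("started"); // lands in output.log, not on stdout or stderr
}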
stderr is the same as stdout?
This is generally true, but under Unix you can set up stderr to point to /dev/console, which means that it goes to another tty (a.k.a. terminal). It's rarely used these days. I had it that way on IRIX: I would open a separate X window and see errors in it.
Also many people send error messages to /dev/null. On the command line you write:
command ...args... 2>/dev/null
syslog
One thing not mentioned: under Unix, you also have syslog().
The newest versions under Linux (and probably macOS) do a lot more than they used to. In particular, they can use the identity and some other parameters to redirect the logs to a specific file (e.g. mail.log). The syslog mechanism can be used between computers, so logs from computer A can be sent to computer B. And of course you can filter logs in various ways, especially by severity.
The syslog() is also very simple to use:
syslog(LOG_ERR, "message #%d", count++);
It offers eight levels (severities), a printf()-style format, and a list of arguments for the format.
Programmatically, you may tweak a few things if you first call the openlog() function. You must call it before your first call to syslog().
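For example (the identity string "myservice" and the facility here are placeholders):

#include <syslog.h>

int main() {
    // Identity, options, and facility apply to every subsequent syslog() call.
    openlog("myservice", LOG_PID | LOG_CONS, LOG_DAEMON);
    syslog(LOG_ERR, "message #%d", 1);
    closelog(); // optional; closes the connection to the logger
}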
As mentioned by unixman83, you may want to use a macro instead. That way you can include some parameters to your messages without having to repeat them over and over again. Maybe something like this (see Variadic Macro):
// (not tested... requires msg to be a string literal)
#define LOG(lvl, msg, ...) \
syslog(lvl, msg " (in " __FILE__ ":%d)", __VA_ARGS__, __LINE__)
You may also find __func__ useful.
The redirection, filtering, etc. is done by creating configuration files. Here is an example from my snapwebsites project:
mail.err /var/log/mail/mail.err
mail.* /var/log/mail/mail.log
& stop
I install the file under /etc/rsyslog.d/ and run:
invoke-rc.d rsyslog restart
so that the syslog server picks up the change and saves any mail-related logs to those files.
Note: I also have to create the /var/log/mail folder and the files inside the folder to make sure it all works right (because otherwise the mail daemon may not have enough permissions.)
snaplogger (a little plug)
I've used log4cplus, which, since version 1.2.x, is quite good. I have three cons about it, though:
it requires me to completely clear everything if I want to call fork(); somehow it does not survive a fork() call properly... (at least in the version I had, it used a thread)
the configuration files (.properties) are not easy to manage in my environment where I like the administrators to make changes without modifying the original
it uses C++03 and we are now in 2019... I'd like to have at least C++11
Because of that, and especially because of point (1), I wrote my own version called snaplogger. This is not exactly a standalone project, though. I use many other projects from the snapcpp environment (it's much easier to just get snapcpp and run the bin/build-snap script or just get the binaries from launchpad.)
The advantage of using a logger such as snaplogger or log4cplus is that you can generally define any number of destinations and many other parameters (such as the severity level, as offered by syslog()). log4cplus is capable of sending its output to many different places: files, syslog, the MS-Windows log system, console, a server, etc. Check out the appenders in those two projects to get an idea of the list of possibilities. The interesting factor here is that any log can be sent to all the destinations. This is useful for having a file named all.log where all your services send their logs, which makes it possible to understand certain bugs that would not be as easy to see with separate log files when running many services in parallel.
Here is a simple example in a snaplogger configuration file:
[all]
type=file
lock=true
filename=/var/log/snapwebsites/all.log
[file]
lock=false
filename=/var/log/snapwebsites/firewall.log
Notice that for the all.log file I require a lock, so multiple writers do not mangle each other's log lines. The lock is not necessary for the [file] section, because I only have one process (no threads) writing to that one.
Both offer you a way to add your own appenders. So for example if you have a Qt application with an output window, you could write an appender to send the output of the SNAP_LOG_ERROR() calls to that window.
snaplogger also offers you a way to extend the variable support in messages (also called the format). For example, I can insert the date using the ${date} variable. Then I can tweak it with a parameter: to output only the year, I use ${date:year}. This variable-parameter support is also extensible.
snaplogger can filter the output by severity (like syslog), by a regex, and by component. We have a normal and a secure component; the default is normal. I want logs sent to the secure component to be written to secure files, meaning in a sub-directory that is far more protected than the normal logs most admins can review. When I run my HTTP services, I sometimes send information such as the last 3 digits of a credit card, and I prefer to have those in a secure log. It could also be password-related errors. Anything I deem to be a security risk in a log, really. Again, components are extensible, so you can have your own.
One little point about the redirecter class: it needs to be destroyed properly, and only once. The destructor will ensure this happens if the function it is declared in actually returns and the object itself is never copied.
To ensure it can't be copied, provide private copy and assignment operators:
class redirecter
{
public:
    redirecter(std::ostream & src, std::ostream & dst)
        : src_(src), sbuf_(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src_.rdbuf(sbuf_); }
private:
    std::ostream & src_;
    std::streambuf * const sbuf_;
    // Prevent copying.
    redirecter( const redirecter& );
    redirecter& operator=( const redirecter& );
};
I'm using this technique by redirecting std::clog to a log file in my main(). To ensure that main() actually returns, I place the guts of main() in a try/catch block. Then, elsewhere in my program where I might call exit(), I throw an exception instead. This returns control to main(), which can then execute a return statement.
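A sketch of that pattern, reusing the redirecter class above (exit_request and run_program are hypothetical names):

#include <fstream>
#include <iostream>

struct exit_request { int code; };

// The guts of main(); throws exit_request where it used to call exit().
int run_program() {
    std::clog << "started\n"; // goes to the log file
    throw exit_request{1};
}

int main() {
    std::ofstream log("my-program.log");
    redirecter redirect(std::clog, log); // std::clog now writes to the file
    try {
        return run_program();
    } catch (const exit_request & e) {
        return e.code; // a normal return, so redirect's destructor runs
    }
}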
Basic Logger
// do { } while (0) makes the macro behave like a single statement after if/else
#define myerr(e) do { CriticalSectionLocker crit; std::cerr << e << std::endl; } while (0)
Used as myerr("ERR: " << message); or myerr("WARN: " << message << code << etc);
It is very effective.
Then do:
./programname.exe 2> ./stderr.log
perl parsestderr.pl stderr.log
or just parse stderr.log by hand
I admit this is not for extremely performance-critical code. But who writes that anyway?