I'd like to implement a verbosity level flag in a Fortran program in the following way. The code would use statements
write (level1, *) 'This will be shown always'
write (level2, *) 'This will be shown sometimes'
and the streams level1, level2 and higher would either be equal to output_unit or correspond to /dev/null (on Unix), depending on the value of the verbosity flag provided by the user.
However, /dev/null is not platform independent. I could try to detect Windows manually and work with NUL there, but I don't want to write platform-specific code. Is there a platform-independent way of writing to an output sink with write in Fortran?
I've made my earlier comment into an answer so we can tick this question off ...
Fortran doesn't provide a platform independent way to send output into the void. If I wanted the facility I might write a little platform-dependent code, and wrap it into a module so I never had to look at it again.
Let's say that another program is going to write or read a specific file. When it does, I need to be able to intercept that read or write and handle it the way I like (for example, program X wants to read a file located at /path/file.txt, but program Y (my program) takes that read "request" and instead gives program X the encrypted first 2 KiB of another file located at /path/file2). Essentially, any time a specified file is read or written, my program will be called and it will handle the read or write request in Dlang or C++. I cannot create a new file system :( and it has to at least work with Linux (so anything specific to Linux works). Also, it is crucial that I RESPOND to the read or write and not preprocess the result; sorry this was not clear in the example.
What you need is a seekable FIFO (named pipe). It has been proposed, but as far as I know it has not been implemented yet (check the Linux kernel changelogs; maybe it has and I just don't know about it).
As suggested, a new, tiny filesystem is your best option. Luckily it is pretty simple to write one with the FUSE project.
Is there a way to execute a callback function, or otherwise call a predefined function whenever data is written to a standard stream such as stderr or stdout? Ideally, this could be used to allow an application to output as normal with printf in the case of stdout or fprintf for other FILE streams, and conditionally perform additional tasks such as assert dependent on current settings. This could have the benefit of automatically triggering this error handling code when other libraries output to the stream.
I know that output to stderr and stdout can be redirected to other FILE handles using std::freopen. Is it practical to implement an alternate FILE stream which provides this behavior, or would that require re-implementing a great deal of standard library functions?
Standards-compliant C++ suggestions (including C++11) would be preferred, although I would be open to Windows only solutions if necessary.
I've since had an attempt at implementing a streambuffer as suggested by doomster, with a little help from Filtering Streambufs by James Kanze. Unless any other suggestions come up, it seems like this is the closest you can get to the original suggestion. It won't intercept C-style output, which I suspect is either impossible or at least impractical, but it otherwise provides all the functionality I wanted.
You can replace the streambuffer in order to intercept any output to those streams, see the rdbuf() function. I think you only need to implement the overflow() function in the streambuffer class.
I just learned of the existence of the ios_base::sync_with_stdio function, which basically allows you to turn off (or on if you already turned it off) the synchronization between iostream streams that are used in C++ and the cstdio streams that are part of Standard C.
Now, I always thought that stdout, stderr and stdin in C were essentially wrapped in a set of objects in C++ in the iostreams classes. But if they have to be synchronized with each other, this would indicate that C++'s iostream classes are not a wrapper around C's stdin etc.
I'm quite confused by this. Can someone clarify how C++'s iostream and C's stdio can be different things that do exactly the same thing, just at a different level of abstraction? I thought they were the same thing!
How is it that they have to be synchronized? I always thought they were the same thing, one essentially wrapping the other.
The C and C++ standards make no requirements on how things are implemented, just on what the effect of certain operations is. For the <stdio> vs. <iostream> functionality this means that one could wrap the other, both could be essentially the same, or they could be entirely independent. Technically, using a common implementation would be ideal for several reasons (e.g. there would be no need for explicit synchronization, and there would be a defined mechanism to extend FILE* for user-defined systems), but I'm not aware of any system which actually does this. Having one implementation be a wrapper of the other is possible, and implementing <iostream>s in terms of <stdio> was a typical implementation choice, although it has the drawback of introducing an extra cost for certain operations; most C++ standard libraries have since moved to entirely separate implementations.
Unfortunately, both the wrapped and the independent implementation share a common problem: I/O is hideously inefficient when done one character at a time. Thus, it is essentially mandatory to buffer characters and read from or write to a buffer. This works nicely for streams which are independent of each other. The catch is the standard C streams stdin, stdout, stderr and their C++ narrow-character counterparts std::cin, std::cout, std::cerr/std::clog and wide-character counterparts std::wcin, std::wcout, std::wcerr/std::wclog, respectively: what happens when a user reads from both stdin and std::cin? If either of these streams read a buffer of characters from the underlying OS stream, the reads would appear out of order. Similarly, if both stdout and std::cout used independent buffers, characters would appear in an unexpected order when a user writes to both streams. As a result, there are special rules on the standard C++ stream objects (i.e. std::cin, std::cout, std::cerr, and std::clog and their wide-character counterparts) which mandate that they synchronize with their respective <stdio> counterparts. Effectively, this means that specifically these C++ objects either use a common implementation directly or are implemented in terms of <stdio> and don't buffer any characters.
It was realized that the cost of this synchronization is quite substantial if the implementations don't share a common base, and may be unnecessary for some users: if a user only uses <iostream> he doesn't want to pay for the extra indirection and, more importantly, he doesn't want to pay for the extra costs imposed by not using a buffer. For careful implementations the cost of not using a buffer can be quite substantial, because it means that certain operations end up having to do a check and possibly a virtual function call on each iteration rather than only once in a while. Thus, std::ios_base::sync_with_stdio() can be used to turn this synchronization off, which may mean that the standard stream objects change their internal implementation more or less entirely. Since the stream buffers of the standard stream objects can be replaced by the user, the implementation unfortunately can't just swap in different stream buffers; instead, the internal implementation of the existing stream buffer is changed.
In good implementations of the <iostream> library all this only affects the standard stream objects. That is, file streams should be entirely unaffected by this. However, if you want to use the standard stream objects and want to achieve good performance you clearly don't want to mix <stdio> and <iostream> and you want to turn synchronization off. Especially, when comparing I/O performance between <stdio> and <iostream> you should be aware of this.
Actually, stdout, stderr and stdin ultimately correspond to file handles of the OS. C's FILE structure and C++'s iostream classes are both wrappers around those file handles. Both the iostream classes and the FILE structure may have their own buffers or other state that needs to be synchronized between them to make sure that input from the file, or output to it, happens in the right order.
Okay, here's what I've found.
Actually, the I/O is ultimately performed by native system calls and functions.
Now, take Microsoft Windows for example. There are actual handles available for STDIN, STDOUT, etc. (see here). So basically, both the C++ iostream and the C stdio implementations call native system functions; the C++ iostream does not wrap C's I/O functions (in modern implementations). It calls the native system functions directly.
Also, I found this:
Once stdin, stdout, and stderr are redirected, standard C functions such as printf() and gets() can be used, without change, to communicate with the Win32 console. But what about C++ I/O streams? Since cin, cout, cerr, and clog are closely tied to C’s stdin, stdout, and stderr, you would expect them to behave similarly. This is half right.
C++ I/O streams actually come in two flavors: template and non-template. The older non-template version of I/O streams is slowly being replaced by a newer template style of streams first defined by the Standard Template Library (STL) and which are now being absorbed into the ANSI C++ standard. Visual C++ v5 provides both types and allows you to choose between the two by including different header files. STL I/O streams work as you would expect, automatically using any newly redirected stdio handles. Non-template I/O streams, however, do not work as expected. To discover why, I looked at the source code, conveniently provided on the Visual C++ CD-ROM.
The problem is that the older I/O streams were designed to use UNIX-style "file descriptors," where integers are used instead of handles (0 for stdin, 1 for stdout, and so on). That’s convenient for UNIX implementations, but Win32 C compilers have to provide yet another I/O layer to represent that style of I/O, since Win32 does not provide a compatible set of functions. In any case, when you call _open_osfhandle() to associate a new Win32 handle with (for example) stdout, it has no effect on the other layer of I/O code. Hence, file descriptor 1 will continue using the same underlying Win32 handle as before, and sending output to cout will not produce the desired effect.
Fortunately, the designers of the original I/O stream package foresaw this problem and provided a clean and useful solution. The base class ios provides a static function, sync_with_stdio(), that causes the library to change its underlying file descriptors to reflect any changes in the standard I/O layer. Though this is not strictly necessary for STL I/O streams, it does no harm and lets me write code that works correctly with either the new or old form of I/O streams.
(source)
Hence calling sync_with_stdio() actually changes the underlying file descriptors. It was in fact added by the designers to ensure that the older C++ I/O streams remain compatible with systems like Win32, which use handles instead of integer file descriptors.
Note that using sync_with_stdio() is not necessary with the modern template-based STL I/O streams.
They are the same thing, but they might also be buffered separately. This could affect code that mixes C and C++ I/O, like this:
std::cout << "Hello ";
printf("%s", "world");
std::cout << "!\n";
For this to work, the underlying streams must be synchronized somehow. On some systems, this might mean that performance could suffer.
So, the standard allows you to call std::ios_base::sync_with_stdio(false) to say that you don't care about code like this, but would prefer to have the standard streams work as fast as possible if it makes a difference. On many systems it doesn't make a difference.
One can be a wrapper around the other (and that works both ways: you could implement the stdio functions using iostreams, and vice versa). Or they can be written completely independently.
And sync_with_stdio guarantees that the two streams will be synchronized if it's enabled. But they can still be synchronized when it's disabled too, if they really want to be.
But even if one is a wrapper around the other, one might still have a buffer that the other doesn't share, for example, so that synchronization is still necessary.
I know many have asked this question before, but as far as I can see, there's no clear answer that helps C++ beginners. So, here's my question (or request if you like),
Say I'm writing a C++ code using Xcode or any text editor, and I want to use some of the tools provided in another C++ program. For instance, an executable. So, how can I call that executable file in my code?
Also, can I exploit other functions/objects/classes provided in a C++ program and use them in my C++ code via this calling technique? Or is it just executables that I can call?
I hope someone could provide a clear answer that beginners can absorb.. :p
So, how can I call that executable file in my code?
The easiest way is to use system(). For example, if the executable is called tool, then:
system( "tool" );
However, there are a lot of caveats with this technique. This call just asks the operating system to do something, but each operating system can understand or answer the same command differently.
For example:
system( "pause" );
...will work on Windows, stopping the execution, but not on other operating systems. Also, the rules regarding spaces inside the path to the file are different. Finally, even the path separator can differ ('\' is Windows-only).
And can I also exploit other functions/objects/classes from a C++ program and use them in my C++ code via this calling technique?
Not really. If you want to use classes or functions created by others, you will have to get their source code and compile it with your program. This is probably one of the easiest ways to do it, provided the source code is small enough.
Many times, people create libraries, which are collections of useful classes and/or functions. If the library is distributed in binary form, then you'll need the DLL file (or its equivalent on other OSes), and a header file describing the classes and functions provided by the library. This is a rich source of frustration for C++ programmers, since even libraries created with different compilers on the same operating system are potentially incompatible. That's why libraries are often distributed in source code form, with a list of instructions (a makefile, or worse) for obtaining a binary version in a single file, plus a header file, as described before.
This is because the C++ standard does not specify the low-level details of what happens inside a compiler. Lots of implementation details were deliberately left for compiler vendors to decide as they wished, possibly in pursuit of better performance. This unfortunately means that it is difficult to distribute even a simple library.
You can call another program easily - this will start an entirely separate copy of the program. See the system() or exec() family of calls.
This is common in unix where there are lots of small programs which take an input stream of text, do something and write the output to the next program. Using these you could sort or search a set of data without having to write any more code.
On Windows it's easy to automatically start the default application for a file, so you could write a PDF file and launch the default PDF viewer. What is harder on Windows is controlling a separate GUI program: unless the program has been deliberately written to allow remote control (e.g. with COM/OLE on Windows), you can't control anything the user does in that program.