how to connect a unit number to stdin and stdout? - fortran

I have overwritten the reserved unit numbers 5 and 6 to read and write files respectively. So I now cannot read stdin or write stdout.
Instead of changing all my code, I want to connect stdin and stdout to the new unit numbers 7 and 8 respectively. How can I do that?
EDIT#1
Let me cite an example to clarify my situation.
OPEN (UNIT=5, FILE='input.txt', STATUS='OLD')
OPEN (UNIT=6, FILE='output.txt', STATUS='REPLACE')
...some code...
READ (5, *format1*) x, y, z
WRITE (6, *format2*) i, j ,k
The original code was not written by me and is about 4000 lines long. Since I am new to Fortran 77, I don't want to modify the OPEN statements, as I am worried that this will lead to more problems for me to solve. If reconnecting stdin and stdout to new unit numbers is possible, it will keep me away from many potential troubles.
Edit#2
Please read my question carefully. My question is how to do it; I am not asking whether it is good to do it. If it is impossible, that is also a valid answer to me and can prevent other people from repeating the same question. Thank you all.

If you are using gfortran, you may try setting the environment variables GFORTRAN_STDIN_UNIT and GFORTRAN_STDOUT_UNIT to select the unit numbers to preconnect to stdin and stdout respectively.
Details can be found in this document:
gfortran.pdf

As you've discovered, 5 and 6 aren't reserved at all; they're just the units which are usually preconnected to stdin and stdout on Unix-like operating systems. I don't believe there is a language-standard, portable way of connecting other units to those pseudo-files. I also believe that by stating my lack of belief I will draw out someone who does know how to do what you want. We'll see.
In the meantime, if you are using the Intel Fortran compiler you can redirect output sent to a file, or input drawn from a file, by setting values for the environment variables FOR_PRINT and FOR_READ. Check the compiler documentation for details.
Finally, I don't understand how you have got into the situation where you need to do what you seem to think you need to do. Surely a global search-and-replace with a good editor will fix the problem? Even more confusing, to me, is that if you have opened files on units 5 and 6, surely you can just modify a few OPEN statements?

The following solution shows the reconnection on Linux, where the pseudo-files /dev/stdin, /dev/stdout and /dev/stderr are available. As these files do not exist on Windows, this is not a portable solution.
If you are developing for Linux, you can simply open these files with your preferred units (preferably stored in named variables, as @VladimirF pointed out). I'm not sure, though, whether the following code is compatible with Fortran 77 (do you actually need Fortran 77?).
program test
    ! named units, as suggested above: 7 = stdin, 8 = stdout
    integer, parameter :: stdin = 7, stdout = 8
    integer :: a
    ! connect the chosen units to the Linux pseudo-files
    open(stdin, file="/dev/stdin", status="old")
    open(stdout, file="/dev/stdout", status="old")
    ! read and write an integer from/to the terminal
    read(stdin, *) a
    write(stdout, *) a
end program

Related

Grabbing printed statements from console C++

I have two loggers in my program: one that I made inside a GUI, and one that is super complicated but very well designed and prints to the console. I am trying to get the output from the nice console logger into the rendered one. I have tried everything under the sun to get this to work, but I can't seem to figure it out (due to my lack of understanding of the code of the other logger, spdlog). My conclusion is that taking the logs directly from what is printed is the best way to do this, but I can't find anyone online asking how to do this. I have seen a few questions, but they just post code as an answer and don't really explain what is going on. My question: is there a way to grab printed statements from the console, and what are the performance issues/complications that come with doing something like this?
For example, if I do std::cout << "hello!" << std::endl; or some printf statement, I want to be able, further down in the code, to grab "hello!".
My conclusion is that taking the logs directly from what is printed is the best way to do this but I can't find online anyone asking how to do this.
Consoles nowadays are terminal emulators. The original terminals' output went to printers and couldn't be (easily) read back.
An application's stdout and stderr (console) streams are write-only. Moreover, on both Windows and Unix/Linux you can pipe your program's console output (either or both of stderr and stdout) into another application with | (pipe), which creates an IPC pipe between the stdout of your application and the stdin of the other one. That IPC pipe is write-only; your application cannot possibly read back from it.
You may be able to get access to the contents of the frame buffer of the Windows cmd.exe that controls the console window, but that won't be a verbatim, byte-exact copy of the data you wrote into stdout, because of the escape sequences interpreted by the Windows console.
If stdout is redirected into a file you could open that file and read it, but there is no portable way for the program to discover the name of that file, or even whether stdout was redirected at all.
In other words, there is no portable way to read console output back.
I have tried everything under the sun to get this to work but I can't seem to figure it out (due to my lack of understanding of the code of the other logger, spdlog).
I bet you haven't tried reading the spdlog documentation, in particular logger with multi sinks. A sink is an output abstraction whose implementation can write into a file, into memory, or both. What you need is to attach your own sink to spdlog that prints into your UI.
Derive your sink from base_sink and implement the abstract member functions:
virtual void sink_it_(const details::log_msg &msg) = 0; to print into the UI, and,
virtual void flush_() = 0; to do nothing.
Then attach one object of your sink class to that spdlog.

Prevent external program from writing into a file, without error appearing in the external program

We have an external program (20 years old .exe; we don't have the source code) that reads input from a file (for example "input.txt") and writes results into another (for example "output.txt"). The program also prints some output to the console. I want to execute this program millions of times with various input files and do something with the results. I am using C++ for this.
Currently I have written a program, which
1) writes an input file,
2) executes the external program with popen(), and
3) reads the results from the console output.
However, file operations are not very fast, and I would like to prevent the program from writing the output file, because it is large compared to the console output and I only need the information printed to the console. However, if the external program is unable to open the output file for writing, execution fails. Is there some way to fake this, so that the external program thinks it is writing a file but actually isn't? The program still has to access the hard drive to read input files. I would prefer a solution that works under Windows XP.
A quick search on Google:
http://www.softperfect.com/products/ramdisk/
RAM Disk for Windows XP, 2003, 2008, Vista, 7 and 8.
I have nothing to do with this project, and this is not an ad. I remember that RAM drives were popular in DOS times; it seems they have not died out completely. You might try using one of them.
You may not be able to fake the writing process, but you can make it faster.
Unix offers tmpfs for this, and Windows has some RAM-disk solutions.

fopen: is it a good idea to leave files open, or use a buffer?

So I have many log files that I need to write to. They are created when the program begins and saved when the program closes.
I was wondering if it is better to do:
fopen() at the start of the program, then close the files when the program ends. I would just write to the files when needed. Will anything (such as other file I/O) be slowed down by these files still being "open"?
OR
I save what needs to be written into a buffer, then open the file, write from the buffer, and close the file when the program ends. I imagine this would be faster?
Well, fopen(3) + fwrite(3) + fclose(3) is a buffered I/O package, so another layer of buffering on top of it might just slow things down.
In any case, go for a simple and correct program. If it seems to run slowly, profile it, and then optimize based on evidence and not guesses.
Short answer:
A big number of open files shouldn't slow anything down.
Writing to a file will be buffered anyway.
So you can leave those files open, but do not forget to check your OS's limit on open files.
Part of the point of log files is being able to figure out what happened when/if your program runs into a problem. Quite a few people also do log file analysis in (near) real-time. Your second scenario doesn't work for either of these.
I'd start with the first approach, but with a high-enough level interface that you could switch to the second if you really needed to. I wouldn't view that switch as a major benefit of the high-level interface though -- the real benefit would normally be keeping the rest of the code a bit cleaner.
There is no good reason to buffer log messages in your program and write them out on exit. Simply write them as they're generated using fprintf. The stdio system will take care of the buffering for you. Of course this means opening the file (with fopen) from the beginning and keeping it open.
For log files, you will probably want a functional interface that flushes the data to disk after each complete message, so that if the program crashes (it has been known to happen), the log information is safe. Leaving stuff in standard I/O buffers means excavating the data from a core dump - which is less satisfactory than having the information on disk safely.
Other I/O really won't be affected by holding one - or even a few - log files open. You lose a few file descriptors, perhaps, but that is not often a serious problem. When it is a problem, you use one file descriptor for one log file - and you keep it open so you can log information. You might elect to map stderr to the log file, leaving that as the file descriptor that's in use.
It's been mentioned that the FILE* returned by fopen is already buffered. For logging, you should probably also look into using the setbuf() or setvbuf() functions to change the buffering behavior of the FILE*.
In particular, you might want to set the buffering mode to line-at-a-time, so the log file is flushed automatically after each line is written. You can also specify the size of the buffer to use.

Issuing system commands in Linux from C, C++

I know that in a DOS/Windows application, you can issue system commands from code using lines like:
system("pause");
or
system("myProgram.exe");
...from stdlib.h. Is there a similar Linux command, and if so which header file would I find it in?
Also, is this considered bad programming practice? I am considering trying to get a list of loaded kernel modules using the lsmod command. Is that a good idea or a bad idea? I found some websites that seemed to view system calls (at least system("pause");) in a negative light.
system is a bad idea for several reasons:
Your program is suspended until the command finishes.
It runs the command through a shell, which means you have to worry about making sure the string you pass is safe for the shell to evaluate.
If you try to run a backgrounded command with &, it ends up being a grandchild process and gets orphaned and taken in by the init process (pid 1), and you have no way of checking its status after that.
There's no way to read the command's output back into your program.
For the first and final issues, popen is one solution, but it doesn't address the other issues. You should really use fork and exec (or posix_spawn) yourself for running any external command/program.
Not surprisingly, the command is still
system("whatever");
and the header is still stdlib.h. That header file's name means "standard library", which means it's on every standard platform that supports C.
And yes, calling system() is often a bad idea. There are usually more programmatic ways of doing things.
If you want to see how lsmod works, you can always look-up its source code and see what the major system calls are that it makes. Then use those calls yourself.
A quick Google search turns up this link, which indicates that lsmod is reading the contents of /proc/modules.
Well, lsmod does it by parsing the /proc/modules file. That would be my preferred method.
I think what you are looking for are fork and exec.

Show or capture complete program output using cmd.exe

I'm practising writing recursive functions using Visual Studio 2015 on Windows 7.
I use cout to track the progress of my code, but it shows too many results, and even when I stop the program I cannot see the initial output... I can only see the output from the middle.
How can I see the complete program output?
The problem is that cmd.exe (the Windows command prompt) has a fixed-size buffer for displaying output. If your program writes a lot of output, only the last N lines will be displayed, where N is the buffer size.
You can avoid this problem in several ways:
Write to a file, instead of to std::cout. All of your output will be captured in the file, which you can read in the text editor of your choice.
Redirect the standard output to a file. Run your program as my_prog.exe > output.log and the output will be redirected to output.log.
Pipe your output to the more command, to show it one screen at a time: my_prog.exe | more
Increase the cmd.exe buffer size. If you right-click on the title bar of the command window, you can select the Properties menu option. Within the Layout tab, you'll see a section called Screen buffer size. Change the Height to a larger value, and you'll be able to capture that many lines of output. Note that this is somewhat unreliable, since you often don't know in advance of executing your program how many lines it will output. One of the other approaches, using files, is often a better solution.
Note that this isn't really a problem with your C++ program. It's entirely reasonable to be able to produce a large quantity of output on the standard output stream. The best solutions are the ones that redirect or pipe the output to a file. These operations are available on most sensible platforms (and Windows, too) and do exactly what you need without having to change your program to write to a file.
I'm not sure I understand your problem; maybe you should write the output to a file instead of the standard output? Then you will see all your results.
Run your application from the commandline and redirect the output to a file:
yourapp.exe > yourapp.log