'Global variables' in Fortran 2008?

I have a Fortran program that runs a series of subroutines. The first of these reads a load of data from a .txt file. All variables are defined in another file, which is pulled into the program and into each subroutine with an 'include' statement. How can I best pass variables to and from the various subroutines?

Consider converting COMMON blocks to modules, then importing only the variables you actually use via use some_module, only: var1, var3, var6. For various reasons, include files are a bad idea, not least because when you change them, they don't trigger make to rebuild the source files that depend on them. Best to leave them behind with the other awful F77isms...
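For illustration, here is a minimal sketch of the module approach; the module and variable names (shared_data, nmax, temps, pressures) are made up, not from the original program:

module shared_data
   implicit none
   integer, parameter :: nmax = 1000
   real :: temps(nmax)
   real :: pressures(nmax)
end module shared_data

subroutine process_temps()
   ! Pull in only the names this routine actually needs.
   use shared_data, only: nmax, temps
   implicit none
   integer :: i
   do i = 1, nmax
      temps(i) = temps(i) * 2.0
   end do
end subroutine process_temps

Unlike an include file, the module gives the compiler a real dependency to track, and the only clause documents exactly which variables each routine touches.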

Related

Why is it recommended to use multiple files when making a C++ project?

I'm learning C++, coming from GML, which is pretty similar in syntax. But what I don't understand is why it's recommended to break up a project into multiple files and multiple types of files.
From what I understand, .cpp files are the actual program code, so people use multiple files to break the functions and parts of the program into separate files. Header files are the same kind of code as .cpp files, but they're used by .cpp files, so it's repeated code that can be referenced in multiple places. Is this correct?
To me it doesn't make as much sense because you're not gonna be swapping out files for different builds; you still have to recompile, and all the files get merged into a binary (or a few if there are DLLs).
So to me, wouldn't it make more sense to just have one large file? Instead of having header files that are referenced repeatedly from many .cpp files, just put the header files' code at the top of a single .cpp file and create sections down the file, using regions for the content of what would be many .cpp files.
I've seen a few tutorials that make small games like Snake using a single file, and they just create sections as they move down: first initializing variables, then a section with all the functions and logic, then a renderer, and at the bottom the main function. Couldn't this be scaled up for a large project? Is it just for organization? Because, while I'm still learning, I feel it's more confusing to search through many files trying to track which files reference which other files and where things are happening, versus having it all in one file, where you could just scroll down and any references would be to code defined above.
Is this wrong or just not common? Are there drawbacks?
If someone could shed some insight I'd appreciate it.
You can use one file if you like. Multiple files have benefits.
Yes, you do swap different files in and out for different builds. Or, at least, some people do. This is a common technique for making projects that run on multiple platforms.
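As a rough sketch of how that works (all file and function names here are invented), the build system compiles exactly one of the platform-specific files, while the rest of the code sees only the shared header:

// platform.hpp -- the interface every build sees
#ifndef PLATFORM_HPP
#define PLATFORM_HPP
#include <string>
std::string platform_name();
#endif

// platform_win32.cpp -- added to the Windows build only
#include "platform.hpp"
std::string platform_name() { return "Windows"; }

// platform_posix.cpp -- added to the Linux/macOS build only
#include "platform.hpp"
std::string platform_name() { return "POSIX"; }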
It is hard to navigate a very large file. Projects can have 100,000 lines of code, or 2,000,000 lines of code, and if you put everything in one file, it may be extremely difficult to edit. Your text editor may slow to a crawl, or it may even be unable to load the entire file into memory.
It is faster to build a project incrementally. C++ suffers from relatively long build times, on average, for typical projects. If you have multiple files, you can build incrementally. This is often faster, since you only have to recompile the files that changed and their dependencies.
If your project is extremely large, and you put everything in one file, it’s possible that the compiler will run out of memory compiling it.
You can make unnamed namespaces and static variables / static functions, which cannot be called from other files. This lets you write code which is private to one file, and prevents you from accidentally calling it or accessing the variables from other files.
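A minimal sketch of that (the file and function names are invented for illustration):

// helpers.cpp
#include <iostream>

namespace {
    // Unnamed namespace: everything in here is private to this file.
    int call_count = 0;
}

// 'static' gives the same file-private linkage, C style.
static int square(int x) { return x * x; }

// Only this function is visible to other files.
int compute(int x) {
    ++call_count;
    std::cout << "compute called " << call_count << " times\n";
    return square(x);
}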
Multiple team members can each work on different files, and you will not get merge conflicts when you push your changes to a shared repository. If two people work on the same file, merge conflicts are more likely.
"I feel it's more confusing to search through many files trying to track which files reference which other files and where things are happening."
If you have a good editor or IDE, you can just click on a function call or press F12 (or some other key) and it will jump to the definition of that function (or type).

How can I convert a big C project to VC++

I'm manually re-writing the code.
I have a big C program with 50+ .c files and 20+ .h files
I need to convert them to a class so I can run multiple instances in a single exe
I have no experience converting a C project to C++. Is there guidance to follow?
I have done some small research with Google and have the following plan:
1. Rename the .c files to .cpp and compile; make all implicit conversions explicit.
2. Remove the static keyword from file-scope declarations and resolve any global name conflicts.
3. Create a global header file for the class declaration (class FOO), and move all function and variable definitions into the class as members.
4. For the macros and constants defined in other header files, include them with extern "C".
5. Rename every function in the .cpp files to FOO::function.
It sounds like your plan is to convert 50 files into a single C++ class and instantiate multiple instances of that class. This is, at best, a severe misuse of C++ classes. In practice, it's unlikely to work, because you will still have only one thread of execution: only one of those objects can run at a time, and every time you do an I/O operation (for example), everything comes to a halt until the I/O operation finishes.
I know nothing about your particular case, but in general, if I were approaching this problem, I would keep my existing code and run multiple processes instead. I'd also write a shim class that manages communication with those processes using inter-process communication (IPC), such as UNIX sockets or named pipes.
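To make the shim idea concrete, here is a minimal POSIX sketch; the class name WorkerShim, the ./legacy_worker binary, and the line-based protocol are all invented for illustration, and error handling (partial writes, waitpid) is omitted for brevity:

// worker_shim.hpp
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdexcept>
#include <string>

class WorkerShim {
public:
    WorkerShim() {
        int fds[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0)
            throw std::runtime_error("socketpair failed");
        pid_ = fork();
        if (pid_ == 0) {
            // Child: becomes one instance of the unmodified legacy program,
            // talking to the parent over stdin/stdout.
            close(fds[0]);
            dup2(fds[1], STDIN_FILENO);
            dup2(fds[1], STDOUT_FILENO);
            execlp("./legacy_worker", "legacy_worker", (char*)nullptr);
            _exit(1);  // only reached if exec failed
        }
        // Parent keeps its end of the socket pair.
        close(fds[1]);
        fd_ = fds[0];
    }

    ~WorkerShim() { close(fd_); }

    // Send one request line and read back one reply.
    std::string request(const std::string& msg) {
        std::string line = msg + "\n";
        write(fd_, line.data(), line.size());
        char buf[4096];
        ssize_t n = read(fd_, buf, sizeof buf);
        return n > 0 ? std::string(buf, n) : std::string();
    }

private:
    int fd_ = -1;
    pid_t pid_ = -1;
};

Each WorkerShim owns one child process, so ten instances really do run concurrently, which ten objects sharing one thread would not.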
If you still feel that multiple instances in one process is the way to go, then carve out a tiny fraction of your current source code and port it over, so that you can understand the issues.

Deciding between datastore and global variables

Description
For a hybrid C/C++ (C++ wrapped in extern "C") application I'm writing, I am trying to decide whether it is better to include some static definitions my program needs as global variables, or to put them in an external datastore that has to be read in every time the program runs (e.g. a .csv file or an SQL database).
My static definitions, when laid out in a CSV file, take approximately 15 columns each, with a maximum of 40 definitions (fewer than that to begin with, but up to 40 due to feature scaling).
Problem
I'm torn, because it feels wrong to me to include so much data as global variables compiled into the program. However, the overhead of reading from a datastore every time I run the program after compiling seems unnecessary.
Question
What are the best practices here? My code needs to be portable enough for someone else to understand and I do not want to obfuscate it.
It might be appropriate to generate a separate C file from the CSV using a high-level language, e.g. Python. Then either #include the generated file (only if it is used in a single module, declared static) or compile it as a separate compilation unit.
That way you can easily change the values in the CSV/spreadsheet program of your choice, while still having all the data available. The generating program can be invoked by the build system, so there is no manual fiddling.
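As a sketch of what the generated file might look like (the struct fields and values are invented; the real ones would mirror the CSV's 15 columns):

/* generated_defs.h -- generated from defs.csv; do not edit by hand */
#ifndef GENERATED_DEFS_H
#define GENERATED_DEFS_H

typedef struct {
    const char *name;   /* column 1 of the CSV */
    double scale;       /* column 2 */
    int flags;          /* column 3, and so on for the remaining columns */
} def_entry;

#define DEF_COUNT 40
extern const def_entry DEFS[DEF_COUNT];

#endif

/* generated_defs.c */
#include "generated_defs.h"

const def_entry DEFS[DEF_COUNT] = {
    { "alpha", 1.25, 0 },
    { "beta",  0.75, 1 },
    /* ... one entry per CSV row ... */
};

The data is then compiled in (no run-time parsing), but anyone who wants to change a value edits the CSV, not the C.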
For speed and code clarity, statically define these variables. Lay out your definitions nicely and comment generously to help future readers of your code. In an external data file you can't add comments to tell future editors what everything is, and reading it in is just slower.
The very best practice is to implement both options, with the flexibility to switch between implementations based on memory, computation speed, and other load conditions.
If the application will run server-side with a generous allocation of memory/CPU, then design to those conditions. Why should it "feel wrong," as you put it?
Your end goal is not clearly defined, so obfuscation is not an issue yet. Obfuscation happens when you willfully write code to hide your tracks; building a complete solution, if that is what is needed, is not obfuscation.

Fortran modules -- finding where variables are defined/assigned

I am trying to extract a portion of a large Fortran program to make it its own program. A particular subroutine imports many modules (only two shown here as an example):
subroutine myroutine(aa,bb)
use xx_module
use yy_module
...
end subroutine myroutine
There are a lot of variables introduced in the ... portion that are imported from these modules. Is there a good way (or a good tool) to find out which variables come from which module? Or do I have to look through each module to see where each one is defined and then assigned (which may well occur in yet another module)?
On a UNIX/Linux system:
grep -ni "variable" filenames
is what I commonly do from a command line. Here, variable is the name of the variable you are looking for, and filenames is the file (or files) you are searching through. This should give you immediate insight into which variables come from which module; you can take on the detective work from there. When in doubt, type "man grep".
SciTools Understand does, amongst many other things, just that sort of thing: double-clicking on a variable takes you to its definition, and from there you can search through the occurrences.
In case you use Eclipse, there is Photran, a plugin for working with Fortran projects. I don't use it myself, so I'm not 100% sure, but I think it should be able to do what you require.

Is it better to define global (extern) variables in a single header, or in their respective header files?

I'm working on a small software project which I hope to release in the future as open-source, so I was hoping to gather opinions on what the best currently accepted practices are in regards to this issue.
The application itself is procedural, not object oriented (there is no need for me to encapsulate the rendering functions or event handling functions in a class), but some aspects of the application are heavily object oriented (like the scripting console, which heavily relies on OO). The OO aspects of the code have the standard object.cpp and object.h files.
For the procedural part, I have my code split up into various files (e.g. main.cpp, render.cpp, events.cpp), each of which might have some global variables specific to that file. I also have a corresponding header file for each, declaring all the functions and variables (as extern) that I want to be accessible from other files. Then I just #include the right header when I need access to that function/variable from another source file.
I realized today that I have another option: create a single globals.h header file, where I could declare all global variables (as extern again) and functions that need to be visible outside a specific source file. Then I could just #include this one file in all of the source files (instead of each individual header file, as I do now). Also, with this method, if I needed to promote a variable/function from local to global, I could just add an entry to that header.
The Question: Is it better practice to use a corresponding header file for every single .cpp file (and declare the variables/functions I want globally accessible in those headers), or to use a single header file declaring all globally accessible variables/functions?
Another quick update: most (but not all) of the globals are used as such because my application is multithreaded.
To me it is far better to have a header file corresponding to each implementation (.c or .cpp) file. You should think of your classes, structures, and functions as modules, and if you split your implementation, it is logical to split your declarations too.
Another thing is that when you modify a header file, every file that includes it has to be recompiled at build time, and I can tell you that can take a long time. You can avoid rebuilding everything by properly splitting your declarations.
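A minimal sketch of that per-module layout (the file and variable names are invented):

// render.hpp -- declarations only
#ifndef RENDER_HPP
#define RENDER_HPP

extern int g_frame_count;  // defined in render.cpp
void render_frame();

#endif

// render.cpp -- the definitions live here
#include "render.hpp"

int g_frame_count = 0;

void render_frame() {
    ++g_frame_count;
    // ... actual drawing code ...
}

// main.cpp -- any file that needs them just includes that one header
#include "render.hpp"

int main() {
    render_frame();
    return 0;
}

Touching render.hpp now only forces a rebuild of the files that actually include it.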
I would recommend having more headers and putting less in each of them. You do end up with a litany of #includes, but that is simple to understand, and easy to edit if it's wrong.
Having one big globals header is harder to cope with when something goes wacky. If you do have to change something, that change is potentially far-reaching and high risk.
More code isn't a bad thing in this case.
A minor point is that your compile times will increase superlinearly the more you put in that one big header, since each and every file has to process it. On an embedded project it is probably less of a worry, but in general, having a lot in headers will start to weigh you down.
It's better to put them all in one file and not compile that file at all. If you have global variables you should be rethinking your design, especially if you're doing applications programming and not low-level systems programming.
As I've said in the comments below the question, the first thing to do would be to try to eliminate all global data. If this is not possible, rather than one big header, or throwing externs into each class's header, I'd follow a third approach.
Say your Event class needs to have a global instance. If you declare the global instance in event.cpp and extern it in event.hpp, then this essentially makes those files non-reusable anywhere else. Throwing it into globals.cpp and globals.hpp is not ideal either, because every time that global header is modified, chances are your entire project will be rebuilt, since the header is included by everyone.
So the third option is to create an accompanying header and source file for each class that needs a global instance. So you'd define the Event global instance in event_g.cpp and extern it in event_g.hpp.
Yes, it is ugly, and yes, it is tedious. But there's nothing pretty about global data to begin with.
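For concreteness, a minimal sketch of that third approach, using the answer's Event example (the exact file contents are my reading of it, assuming event.hpp defines class Event):

// event_g.hpp -- the only header that knows about the global
#ifndef EVENT_G_HPP
#define EVENT_G_HPP

#include "event.hpp"   // class Event itself stays untouched and reusable

extern Event g_event;  // declaration only

#endif

// event_g.cpp -- the single definition of the global instance
#include "event_g.hpp"

Event g_event;

Only the files that genuinely need the global include event_g.hpp, so event.hpp stays reusable and edits to the global's header rebuild almost nothing.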