Why is it required to put a name after the PROGRAM keyword when programming in Fortran? Does it make a difference? Did it have some use in the past? I can't think of any effect it has on the rest of the code other than that the name is now reserved for the main program and can't be used for any other variable or procedure.
It works the same as in Pascal: it provides a module name for operating systems and environments which need an explicit job name. Examples include KRONOS, OS/360, RSX-11, and GCOS. Three of those run on iron dinosaurs. RSX-11 may have been partially designed to appeal to iron dinosaur programmers, but I notice that requirement was dropped by VAX/VMS.
Otherwise, the program name is all but useless. Maybe there are some compilation error messages which use it.
If nothing else, it may be useful to have a program name to easily distinguish different programs. But note that the PROGRAM statement is not necessary in a Fortran program at all. The only mandatory statement, which also happens to be the shortest Fortran program possible (although not a particularly useful one), is:
END
I figured one of the best ways to learn and improve as a programmer is to read other people's source code. I was looking at Blender's source code and noticed something about the header files. Most of them used #ifndef include guards whose macros were wrapped in double underscores (e.g. __BMESH_CLASS_H__).
It got me thinking: the whole "just don't make anything that starts with underscores at all" advice is good for beginners, but I would think that in order to progress further in programming I should learn when creating my own reserved identifiers is and isn't appropriate.
Reserved identifiers are reserved for the implementation, which means roughly the compiler, its runtime library, and maybe parts of the operating system.
So, it's appropriate to create your own when your progression has led to you writing your own compiler or operating system. That's pretty much it.
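For instance, an ordinary include guard gives you everything the double-underscore style from the question does without touching the reserved namespace (header and macro names below are made up):

    // my_widget.h -- a guard name derived from the project and file name,
    // with no leading underscores, stays out of the implementation's namespace.
    #ifndef MYPROJECT_MY_WIDGET_H
    #define MYPROJECT_MY_WIDGET_H

    struct Widget {
        int id;
    };

    #endif // MYPROJECT_MY_WIDGET_H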
There is only one case I know of where it's reasonable to define reserved identifiers yourself. All other uses are undefined behavior, and while they will most likely still work exactly the same, it's against the standard and shouldn't be done.
That said, you can define reserved identifiers yourself when interacting with certain components of your development environment. For example, some compilers may support something like __FILE_NAME__ and others may not, and support may even vary from compiler version to compiler version. If you are defining it yourself for, say, backwards compatibility (i.e. adding a preprocessor definition for said macro), then it is 100% okay, so long as its implementation follows exactly what is expected of that identifier (e.g. it should expand to the file name for __FILE_NAME__, not something else).
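A minimal sketch of that kind of backwards-compatibility shim, assuming a compiler that lacks the __FILE_NAME__ extension (where supported, it expands to the basename of the current source file):

    #ifndef __FILE_NAME__
    // Fallback for compilers without the extension.  Note this is only an
    // approximation: __FILE__ may expand to a full path rather than just the
    // file name, so the substitution must be acceptable for your use case.
    #define __FILE_NAME__ __FILE__
    #endif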
I have a long-standing C library used by C and C++ programs.
It has a few compilation units (in other words, C++ source files) that are entirely C++, but this has never been a problem. My understanding is that linkers (on Linux, Windows, etc.) always work at the file-by-file level, so an object file in a library that isn't referred to has no effect on the link, isn't put in the binary, and so on. The C users of the library never refer to the C++ symbols, and the library doesn't internally, so the resulting linked app is C-only. So while it has always worked perfectly, I don't know if that's because the C++ never makes it past the linking stage, or because, more fundamentally, this kind of mixing would work even if the languages actually intermingled.
For the first time I'm thinking of adding some C++ code to the existing C API's implementation.
For purposes of discussion, let us say I have a C function that does something and logs it via stdout, and since stdout is buffered separately from cout, the output can become confusing. So let us say this module has an option that can be set to log to cout instead of stdout. (This is a more general question, not merely about getting cout and stdout to cooperate.) The C++ code might or might not run, but the dependencies will definitely be there.
In what way would this impact users of this library? It is widely used so I cannot check with the entire user base, and as it's used for mission-critical apps it'd be unacceptable to make a change that makes links start failing, at least unless I supply a release note explaining the problem and solution.
Just as an example of a possible problem, I know compilers have "hidden" libraries of support functions that are necessary for C and C++ programs. There are obviously also the Standard C and C++ libraries, which normally you don't have to link to explicitly. My concern is that a link driven by the C compiler might not know to pull these in.
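To make the scenario concrete, here is a rough sketch of the kind of change being considered (all names are hypothetical). The point is that the moment the implementation file includes <iostream>, the object file depends on the C++ standard library and its startup machinery, even for callers that never enable the option:

    // mylib_log.cpp -- C++ implementation behind an existing C API (names made up)
    #include <cstdio>
    #include <iostream>

    static bool g_log_to_cout = false;   // the option described above

    extern "C" void mylib_set_log_to_cout(int enable) {
        g_log_to_cout = (enable != 0);
    }

    extern "C" void mylib_do_something(const char *what) {
        /* ... the existing C logic ... */
        if (g_log_to_cout)
            std::cout << "did: " << what << '\n';     // new C++ runtime dependency
        else
            std::fprintf(stdout, "did: %s\n", what);  // original stdout path
    }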
Say I have a C++ project which has been working well for years.
Say also that this project might (I need to verify) contain undefined behaviour.
So maybe the compiler has been kind to us and doesn't make the program misbehave even though there is UB.
Now imagine I want to add some features to the project, e.g. add the Crypto++ library to it.
But the actual code I add, say from Crypto++, is legitimate.
Here I read:
Your code, if part of a larger project, could conditionally call some
3rd party code (say, a shell extension that previews an image type in
a file open dialog) that changes the state of some flags (floating
point precision, locale, integer overflow flags, division by zero
behavior, etc). Your code, which worked fine before, now exhibits
completely different behavior.
But I can't gauge exactly what the author means. Is he saying that even by adding, say, the Crypto++ library to my project, and even though the code I add from Crypto++ is legitimate, my project can suddenly start working incorrectly?
Is this realistic?
Any links which can confirm this?
It is hard for me to explain to the people involved that just adding a library might increase risk. Maybe someone can help me formulate how to explain this?
When source code invokes undefined behaviour, it means that the standard gives no guarantee on what could happen. It can work perfectly in one compilation run, but simply compiling it again with a newer version of the compiler or of a library could make it break. Or changing the optimisation level on the compiler can have the same effect.
A common example is reading one element past the end of an array. Suppose you expect that element to be null, and by chance the next memory location contains a 0 under normal conditions (say it is an error flag). It will work without problem. But suppose now that on another compilation run, after changing something totally unrelated, the memory organization is slightly changed and the next memory location after the array is no longer that flag (which kept a constant value) but a variable taking other values. Your program will break and will be hard to debug, because if that variable is used as a pointer, you could overwrite memory in random places.
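A sketch of that kind of off-by-one read (illustrative only; what actually sits past the end of the array is entirely up to the implementation):

    #include <cstdio>

    int main() {
        int values[3] = {10, 20, 30};
        // Undefined behaviour: i == 3 reads one element past the end of `values`.
        // In one build the stray read may happen to land on memory that is zero
        // and the program "works"; after an unrelated change, the neighbouring
        // memory can hold something else entirely and the behaviour changes.
        for (int i = 0; i <= 3; ++i)
            std::printf("%d\n", values[i]);
        return 0;
    }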
TL/DR: If one version works but you suspect UB in it, the only correct approach is to systematically remove all possible UB from the code before any change. Alternatively, you can keep the working version untouched, but beware: you may have to change it later...
Over the years, C has mutated into a weird hybrid of a low-level language and a high-level language, where code provides a low-level description of a way of performing a task, and modern compilers then try to convert that into a high-level description of what the task is and then implement efficient code to perform that task (possibly in a way very different from what was specified). In order to facilitate the translation from the low-level sequence of steps into the higher-level description of the operations being performed, the compiler needs to make certain assumptions about the conditions under which those low-level steps will be performed. If those assumptions do not hold, the compiler may generate code which malfunctions in very weird and bizarre ways.
Complicating the situation is the fact that there are many common programming constructs which might be legal if certain parts of the rules were a little better thought-out, but which as the rules are written would authorize compilers to do anything they want. Identifying all the places where code does things which arguably should be legal, and which have historically worked correctly 99.999% of the time, but might break for arbitrary reasons can be very difficult.
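A well-known example of such a construct is the after-the-fact signed overflow check: it reads naturally and worked for decades on two's-complement machines, yet the standard makes it undefined, so a modern optimiser is entitled to assume the overflow can't happen and fold the test away:

    #include <climits>
    #include <cstdio>

    static bool will_overflow(int x) {
        // Undefined when x == INT_MAX; an optimising compiler may reduce
        // this whole function to `return false;`.
        return x + 1 < x;
    }

    int main() {
        // Typically prints 1 without optimisation and 0 at higher -O levels,
        // though nothing is guaranteed either way.
        std::printf("%d\n", will_overflow(INT_MAX) ? 1 : 0);
        return 0;
    }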
Thus, one may wish for the addition of a new library not to break anything, and most of the time one's wish might come true, but unfortunately it's very difficult to know whether any code may have lurking time bombs within it.
Lately, I've been studying the D language. I've always been kind of confused about the runtime.
From the information I can gather about it (which isn't a whole lot), I understand that it's sort of a, well, runtime that runs along with your own program and supports some of D's features, like garbage collection. But since D is compiled to machine code, does it really need features such as garbage collection if our program doesn't need them?
What really confuses me is statements such as:
"You can write an operating system in D."
I know that you can't really do that, because there's more to an operating system than any compiled language can give without using some assembly. But if you had a kernel that called D code, would the D runtime prevent D from running in such a bare-bones environment? Or is the D runtime simpler than that? Can it be thought of as simply an "automatic" inclusion of source files/libraries that, when compiled with your application, make no more of a difference than writing that code yourself?
Maybe I'm just looking at it all wrong. But I'm sure some information on the subject could do a lot of people good.
Yes, indeed, you can implement the functions of DRuntime that the compiler expects right in your main module (or wherever), compile without a runtime, and it'll Just Work (tm).
If you just build your code without a runtime, the compiler will emit errors when it's missing a symbol that it expects to be implemented by the runtime. You can then go and look at how DRuntime implements it to see what it does, and then implement it in whatever way you prefer. This is what XOmB, the kernel written in D (language version 1, though, but same deal), does: http://xomb.net/index.php?title=Main_Page
A lot of DRuntime isn't actually used by many applications, but it's the most convenient way to include the runtime components of D into applications, so that's why it's done as a static library (hopefully a shared library in the future).
It's pretty much the same as C and C++, I expect. The language itself compiles to native code and just runs. But there is some code that is always needed to set everything up to run your program, for example processing command-line parameters.
And some more complex language facilities are better implemented by calling some standard code rather than generating the code everywhere it is used. For example, throwing an exception needs to find the relevant handler function. No doubt the compiler could insert the code to do that everywhere it is needed, but it's much more sensible to write the code once in a library and call that. Plus there are many pre-written library functions in the standard library.
All of this taken together is the runtime.
If you write C you can use it to write an operating system, because you can write the startup code yourself, you can write all the code for handling memory allocation yourself, and you can write all the code for standard functions like strcat yourself instead of using the ones provided in the runtime. But you wouldn't want to do that for an ordinary application program.
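As a rough illustration of what "writing the startup code yourself" means, here is a sketch in C/C++ terms, assuming Linux on x86-64 and a build line such as g++ -nostdlib -static -ffreestanding start.cpp (the details are toolchain specific):

    // With -nostdlib there is no crt0, so you must supply the entry point the
    // linker expects.  Everything the runtime normally does before main() --
    // collecting argc/argv, running static constructors, setting up stdio,
    // registering exception tables -- is simply absent here.
    extern "C" void _start() {
        // Exit straight away via the raw exit_group system call (number 231
        // on Linux x86-64); without libc there is no exit() to call.
        asm volatile("mov $231, %%rax\n\t"
                     "xor %%edi, %%edi\n\t"
                     "syscall"
                     ::: "rax", "rdi");
    }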
Hi,
I am normally a C programmer.
I regularly debug C programs in a Unix environment using tools like gdb and dbx.
I have never debugged big C++ applications.
Is that much different from how we debug in C?
Theoretically I am quite good at C++, but I have never had a chance to debug C++ programs.
I am also not sure what kind of technical problems we face in C++ that lead a developer to switch on the debugger to find the problem.
What are the common issues in C++ that make you start the debugger?
What are the challenges that a C programmer might face while debugging a C++ program?
Is it difficult and complex compared to C?
It is basically the same.
Just remember that when setting breakpoints manually you need to fully qualify the method name with both the namespace(s) and the class. (As a result I sometimes find it easier to use line numbers to define breakpoints.)
Don't forget that calls to destructors are invisible in the source, but you can still step into them at the end of a block.
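A toy example (names invented) showing both points; the gdb commands are in the comments:

    #include <cstdio>

    namespace audio {
    class Mixer {
    public:
        void render() { std::puts("rendering"); }       // gdb: break audio::Mixer::render
        ~Mixer()      { std::puts("mixer torn down"); }  // runs at the closing brace of main
    };
    }  // namespace audio

    int main() {
        audio::Mixer m;
        m.render();
        return 0;
    }   // stepping here enters audio::Mixer::~Mixer(), even though no call is written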
A few minor differences:
When typing a fully-qualified symbol such as foo::bar::fum(args) in the gdb shell, you have to start with a single quote for gdb to recognize it and calculate completions.
As others have said, library templates expose their internals in the debugger. You can poke around in std::vector pretty easily, but poking through std::map may not be a wise way to spend your time.
The aggressive and abundant inlining common in C++ programs can make a single line of code have seemingly endless steps. Things like shared_ptr can be particularly annoying because every access to the pointer expands inline to the template internals. You never really get used to it.
If you've got a ton of overloaded symbol names, selecting which one you want from the readline completion can be unpleasant. (Which "foo" did you want? All of them? Just these two?)
GDB can be used to debug C++ as well, so if you have an understanding of how C++ works (and understand problems that can stem from the object-oriented side of things), then you shouldn't have all that much trouble (at least, not much more than you would debugging a C program). I think...
Quite a few issues really, but it also depends on the debugger you are using, its version, etc.:
Accessing individual members of a templatized class is not easy
Exception handling is a problem -- I have seen debuggers do a better job with setjmp/longjmp
Setting conditional breakpoints with something like obj1 == obj2, where these are not POD types, may not work (see the sketch below)
One thing I like about debuggers is that to access private/protected class members I don't have to call get routines; just [obj-name].[var-name] is good enough.
Arpan
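To make the last two points concrete, a toy class (all names invented); the gdb commands appear in the comments:

    // A small value class: the conditional breakpoint relies on operator==
    // being available, and private members are still readable in the debugger.
    class Account {
    public:
        Account(int id, double balance) : id_(id), balance_(balance) {}
        bool operator==(const Account &other) const {
            return id_ == other.id_ && balance_ == other.balance_;
        }
    private:
        int id_;
        double balance_;
    };

    void transfer(Account &from, Account &to) {
        // gdb:  break transfer if from == to   <- needs a callable operator==;
        //                                         for non-POD types this may not work
        // gdb:  print from.balance_            <- private members can be read
        //                                         directly, no getter needed
        (void)from;
        (void)to;   // placeholder body
    }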
GDB has had a rocky past with regard to debugging C++. For a while it couldn't efficiently break inside constructors/destructors.
Also, STL containers were notoriously difficult to inspect in gdb. std::string was painful but generally workable. std::map was so difficult that I generally added print statements unless there was no other way.
The constructor/destructor problem has been fixed for a few years.
The STL support was fixed in gdb 7.0.
You might still have issues with Boost's libraries. At times I had difficulty getting gdb to give me access to the contents of a shared_ptr.
So I guess debugging your own C++ isn't really that difficult; it's debugging third-party classes and template code that can be a problem.
C++ objects can sometimes be harder to analyze. Also, as data is sometimes nested in several classes (across several layers), it might take some time to "unfold" it (as others in this thread have already said). It's hard to say in general, as it depends very much on the C++ features used, the programming style, and the complexity of the problem being analyzed (and that last part is language independent).
IMO: if someone finds himself needing to debug very often, he should reconsider his programming style.
For me it usually comes down to error handling in the end. If a program behaves unexpectedly, your error logs should contain enough information to reconstruct what happened at any stage.
This also gives you the benefit that you can "debug" problems offline later, once your program has shipped to end users.