I am using lib.exe to create a library from compiled Fortran objects (ancient F77, Intel compiler 18).
The Fortran has duplicate common blocks of different sizes. It also has duplicate methods.
This is legacy code with a clunky sort of overloading.
For a duplicate method lib.exe seems to always take the method from the first object.
For a duplicate common block it takes it from the last object in common.lib:
lib.exe /OUT:target.lib pmk.obj
lib.exe target.lib common.lib
The common blocks only differ in array sizing e.g.
COMMON /CPSTKC/ ISTACK(6,200)
vs
COMMON /CPSTKC/ ISTACK(6,15)
And I need the larger one.
I can't just reverse the lib.exe order as then it takes the wrong method.
Also I don't want to touch common.lib if I can help it, but pmk.f is fair game.
How can I understand what is happening here so I can get it to behave?
Because of my status I can't make comments, but since no one else chimed in, here are my thoughts. Do you have BLOCK DATA units (these initialize commons) in this program? If not, you could add one at the end of your main program or create one separately. It could specify the storage shape/size of the arrays. Since BLOCK DATA units are not executable, they can't be pulled in from a library, so they need to be in something explicitly linked in the link step, which might make it easiest to place one in the main module. You also get the advantage of explicit initialization of common data before any executable statements run (uninitialized common data is a common source of bugs).
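For illustration, a minimal BLOCK DATA unit along these lines might look like the following sketch (the unit name and the zero initialisation are my own assumptions, not from your code):

      BLOCK DATA CPSTKINIT
C     Pin /CPSTKC/ to the larger shape and initialise it; link this object
C     explicitly (e.g. alongside the main program) so this layout is the
C     one the linker keeps.
      INTEGER ISTACK
      COMMON /CPSTKC/ ISTACK(6,200)
      DATA ISTACK /1200*0/
      END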
BTW, the duplicate methods you mention sound very risky and compiler/linker/version dependent (the common aliasing less so, since I think it was specified in at least the F77 standard if not later). Why would you have the same method in the same code base twice?
Linker question:
If I had a .c file that has no includes at all, would we still need a linker?
Although the linker is so-named because it links together multiple object files, it performs other functions as well. It may resolve addresses that were left incomplete by the compiler. It produces a program in an executable file format that the system’s program loader can read and load, and that format may differ from that of object modules. Specifics depend on the operating system and build tools.
Further, to have a complete program in one source file, you must provide not just the main routine you are familiar with from C and C++ but also the true start of the program, the entry point that the program loader starts execution at, and you must provide implementations for all functions you use in the program, such as invocations of system services via special trap or system-call instructions to read and write data.
You can create a project that has no typical C startup code, in which case you may not even have a main(). However, you still need a linker, because the linker creates the required executable file format for the given architecture.
It also sets the entry point, where actual execution starts.
So you can omit the standard libraries and create a binary which is completely void of any C functions, but you still need the linker to actually make a runnable binary.
The object file format generated by the compiler is very different from the executable file format, because it only provides the information that is required by the linker.
Yes. The linker does more than merely link the files. Check out this resource for more info: https://en.wikibooks.org/wiki/C%2B%2B_Programming/Programming_Languages/C%2B%2B/Code/Compiler/Linker
Believe it or not, multiple libraries can be referenced by default. So, even if you don't #include a resource, the compiler may have to internally link or reference something outside of the translation unit. There are also redundancies and other considerations that are "eliminated" by the compiler.
Despite its name, the linker is properly a "linker/locator". It performs two functions: 1) linking object code, 2) determining where in memory the data and code elements exist.
The object code out of the compiler is not "located" even if it has no unresolved links.
Also even if you have the simplest possible valid code:
int main(){ return 0; }
with no includes, the linker will normally implicitly link the C runtime start-up, which is required to do everything necessary before running main(). That may be very little. On some targets, such as ARM Cortex-M, you can in fact run C code directly from the reset vector so long as you don't assume static initialisation or complete library support. So it is possible to write the reset code entirely in C, but you probably still need code to initialise the vector table with the reset handler (your C start-up function) and the initial stack pointer. On Cortex-M that can perhaps be done using in-line assembler, but it is all rather cumbersome and unnecessary, and it still does not remove the need for the linker.
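To make that concrete, here is a minimal sketch (the build commands are my assumptions, shown GCC-style; nothing here is from the question): even with no #include at all, a separate link step is still what turns the object file into something runnable.

/* minimal.c -- no includes at all */
int main(void)
{
    /* the C runtime start-up that eventually calls main() is linked in
       implicitly; the compiler alone only produces an object file */
    return 0;
}

/* hypothetical build steps:
     cc -c minimal.c          compile only: emits minimal.o (not runnable)
     cc minimal.o -o minimal  link: pulls in the CRT start-up, sets the entry
                              point, and writes the executable file format   */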
I am working on creating a package with two new commands, say foo and bar.
For example, if foo.ado contains:
program define foo
...
rex
end
program define rex
...
end
But my other command, bar.ado, also needs to call rex. Where should I put rex?
I see the following few options:
Create a rex.ado file as well.
Create a rex.do file and include it from within both foo.ado and bar.ado using include "`c(sysdir_plus)'r/rex.do" at the bottom of each file.
Copy the code into both foo.ado and bar.ado, which seems ugly because now the code must be maintained in two places.
What is best practice for organizing subroutines that are needed by both foo and bar?
Also, should the subroutine be called rex, _rex, or something else — maybe _foobar_rex — to indicate it is actually a sub-command that foo and bar depend on to work correctly rather than a separate command intended to stand on its own?
Create a rex.ado file as well
Your question is a bit too broad. Personally, I would go with the first option to be safe, although it really depends on the structure of your project. Sometimes including rex in a single ado-file may be enough. This will be the case, for example, if foo is a wrapper command. However, for most other use cases, including two commands sharing a common program, I strongly believe that you will need a separate ado-file.
The second option is obviously unnecessary, since the first does the same thing without having to load the program every single time you call it. The third option is probably the worst in a programming context, as it may create conflicts and will be difficult to maintain down the road.
With regards to naming conventions, I would recommend using something like _rex only if you include the program as a subroutine in an ado file. Otherwise, rex will do just fine and will also indicate that the program has a wider scope within your project. It is also better, in my opinion, to provide a more elaborate explanation about the intended use of rex using a comment at the start of the ado file, rather than trying to incorporate this in the name.
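As a sketch of the first option (the file contents here are hypothetical, just to show the layout):

*! rex.ado -- the shared subroutine shipped as its own ado-file
program define rex
    version 14
    display as text "rex: shared work used by both foo and bar"
end

*! foo.ado -- calls rex; Stata finds and loads rex.ado from the ado-path automatically
program define foo
    version 14
    rex
end

bar.ado would call rex in exactly the same way.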
class SomeClass
{
//some members
MemberClass one_of_the_mem_;
};
I have a function foo( SomeClass *object ) within a DLL; it is being called from an exe.
Problem
The address of one_of_the_mem_ changes during the time the DLL call is dispatched.
Details:
before the call is made (from exe):
'&(this).one_of_the_mem_' - `0x00e913d0`
after - in the dll itself :
'&(this).one_of_the_mem_' - `0x00e913dc`
The address of object remains constant. It is only the member whose address shifts by 0xc every time.
I want some pointers regarding how I can troubleshoot this problem.
Code:
Code from Exe
stat = module->init ( this,
object_a,
&object_b,
object_c,
con_dir
);
Code in DLL
Status_C ModuleClass( SomeClass *object, int index, Config *conf, const char* name)
{
_ASSERT(0); //DEBUGGING HOOK
...
Update 1:
I compared the offsets of the members following Michael's instructions and they are the same in both cases.
Update 2:
I found a way to dump the class layout and noticed the difference in size; I have to figure out why that is happening, though.
Linked is the question I found about dumping the class layout.
Update 3:
Final update: solved the problem, much thanks to Michael Burr.
It turned out that one of the builds was using 32-bit time (_USE_32BIT_TIME_T was defined in it) and the other one was using 64-bit time. So the compiler generated a different layout for the object; attached is the difference file.
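For anyone hitting the same thing, here is a minimal sketch of how that macro shifts a member's offset (the time_t field is assumed for illustration; it is not from the actual code):

#include <ctime>

class MemberClass { int data_; };

class SomeClass
{
    time_t      created_;         // 4 bytes with _USE_32BIT_TIME_T defined, 8 bytes without
    MemberClass one_of_the_mem_;  // so its offset differs between a module built with the
};                                // macro and one built without it, even from the same header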
Your DLL was probably compiled with a different set of compiler options (or maybe even a slightly different header file) and the class layout is different as a result.
This can happen, for example, if one module was built using debug flags and the other wasn't, or if different compiler versions were used: the libraries shipped with different compiler versions might have subtle differences, and if your class incorporates a type defined by such a library you could get different layouts.
As a concrete example, with Microsoft's compiler, iterators and containers are sensitive to the release/debug, _SECURE_SCL on/off, and _HAS_ITERATOR_DEBUGGING on/off settings (at least up through VS 2008; VS 2010 may have changed some of this to a certain extent). See http://connect.microsoft.com/VisualStudio/feedback/details/352699/secure-scl-is-broken-in-release-builds for some details.
These kinds of issues make using C++ classes across DLL boundaries a bit more fragile than using straight C interfaces. They can occur in C structures as well, but it seems like C++ libraries have these differences more often (I think that's the nature of having richer functionality).
Another layout-changing issue that occurs every now and then is having a different structure packing option in effect in the different compiles. One thing that can 'hide' this is that pragmas are often used in headers to set structure packing to a certain value, and sometimes you may come across a header that does this without changing it back to the default (or more correctly the previous setting). If you have such a header, it's easy to have it included in the build for one module, but not another.
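As a sketch of that packing pitfall (the header and member names are hypothetical):

// third_party.h -- sets packing but forgets to restore the previous setting
#pragma pack(1)
// ... declarations that genuinely need 1-byte packing ...

// some_class.h -- included after third_party.h in one module but not in another
class MemberClass { char tag_; int data_; };

class SomeClass
{
    char        flag_;            // with default packing, padding follows this member
    MemberClass one_of_the_mem_;  // offset 1 under pack(1), offset 4 with default packing,
};                                // so member addresses and sizeof(SomeClass) differ per module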
That sounds a bit weird; you should show more code. It shouldn't 'move' if it is being passed by reference. It sounds more like a copy of it is being made and the member function is being called on that copy.
Perhaps the DLL was compiled against a different version of the class than the one you are referencing. Check and make sure the header file is for the same version as the DLL.
Recompile the library if you can.
I'm working on a number crunching app using the CUDA framework. I have some static data that should be accessible to all threads, so I've put it in constant memory like this:
__device__ __constant__ CaseParams deviceCaseParams;
I use the call cudaMemcpyToSymbol to transfer these params from the host to the device:
void copyMetaData(CaseParams* caseParams)
{
cudaMemcpyToSymbol("deviceCaseParams", caseParams, sizeof(CaseParams));
}
which works.
Anyways, it seems (by trial and error, and also from reading posts on the net) that for some sick reason, the declaration of deviceCaseParams and the copy operation on it (the call to cudaMemcpyToSymbol) must be in the same file. At the moment I have these two in a .cu file, but I really want to have the parameter struct in a .cuh file so that any implementation could see it if it wants to. That means that I also have to have the copyMetaData function in a header file, but this messes up linking (symbol already defined) since both .cpp and .cu files include this header (and thus both the MS C++ compiler and nvcc compile it).
Does anyone have any advice on design here?
Update: See the comments
With an up-to-date CUDA (e.g. 3.2) you should be able to do the memcpy from within a different translation unit if you're looking up the symbol at runtime (i.e. by passing a string as the first arg to cudaMemcpyToSymbol as you are in your example).
Also, with Fermi-class devices you can just malloc the memory (cudaMalloc), copy to the device memory, and then pass the argument as a const pointer. The compiler will recognise if you are accessing the data uniformly across the warps and if so will use the constant cache. See the CUDA Programming Guide for more info. Note: you would need to compile with -arch=sm_20.
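A rough sketch of that second approach (the struct contents, names, and launch configuration are made up for illustration):

#include <cuda_runtime.h>

struct CaseParams { int n; float scale; };   // hypothetical contents

__global__ void crunch(const CaseParams* params, float* out)
{
    // uniform, read-only access across the warp, so on sm_20+ hardware the
    // compiler can route these loads through the constant cache
    out[threadIdx.x] = params->scale * threadIdx.x;
}

void launchCrunch(const CaseParams& hostParams, float* d_out)
{
    CaseParams* d_params = 0;
    cudaMalloc((void**)&d_params, sizeof(CaseParams));      // ordinary device memory
    cudaMemcpy(d_params, &hostParams, sizeof(CaseParams),
               cudaMemcpyHostToDevice);                     // no symbol lookup needed
    crunch<<<1, 128>>>(d_params, d_out);                    // build with -arch=sm_20
    cudaFree(d_params);
}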
If you're using pre-Fermi CUDA, you will have found out by now that this problem doesn't just apply to constant memory, it applies to anything you want on the CUDA side of things. The only two ways I have found around this are to either:
Write everything CUDA in a single file (.cu), or
If you need to break out code into separate files, restrict yourself to headers which your single .cu file then includes.
If you need to share code between CUDA and C/C++, or have some common code you share between projects, option 2 is the only choice. It seems very unnatural to start with, but it solves the problem. You still get to structure your code, just not in a typically C like way. The main overhead is that every time you do a build you compile everything. The plus side of this (which I think is possibly why it works this way) is that the CUDA compiler has access to all the source code in one hit which is good for optimisation.
Imagine you'd like to write a program that tests functions in a c++ dll file.
You should enable the user to select a dll (we assume we are talking about c++ dlls).
He should be able to obtain a list of all functions exported by the dll.
Then, the user should be able to select a function name from the list, manually input a list of arguments (the arguments are all basic types, like int, double, bool or char arrays, e.g. C-type strings) and attempt to run the selected function with the specified arguments.
He'd like to know if the function runs with the specified arguments, or whether they cause it to crash (because they don't match the signature, for example).
The main problem is that C++, being a strongly typed language, requires you to know the number and type of the arguments for a function call at compile time. And in my case, I simply don't know what these arguments are until the user selects them at runtime.
The only solution I came up with, was to use assembly to manually push the arguments on the call stack.
However, I've come to understand that if I want to mess with assembly, I'd better make damn sure that I know which calling convention are the functions in the dll using.
So (finally:) here's my question: can I deduce the calling convention programmatically? Dependency Walker won't help me, and I've no idea how to manually read the PE format.
The answer is maybe.
If the function names are C++-decorated, then you can determine the argument count and types from the name decoration. This is your best-case scenario, and fairly likely if MSVC was used to write the code in the first place.
If the exported functions use the stdcall calling convention (the default for the Windows API), you can determine the number of bytes to be pushed, but not the types of the arguments.
The bad news is that for C calling convention, there isn't any way to tell by looking at the symbol names. You would need to have access to the source code or the debug info.
http://en.wikipedia.org/wiki/X86_calling_conventions
The name that a function is given as an export is not required to have any relationship with the name that the linker sees, but most of the time, the exported name and the symbol name that the linker sees are the same.
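To make the decoration cases above concrete, here is what MSVC's 32-bit schemes look like for a few hypothetical functions (the names are invented; only the decoration patterns matter):

// declaration as the DLL author might write it          exported / linker-level name
extern "C" int __cdecl   add_c(int a, int b);          // _add_c             (no byte count: cdecl)
extern "C" int __stdcall add_std(int a, int b);        // _add_std@8         (@8 = bytes of arguments)
extern "C" int __fastcall add_fast(int a, int b);      // @add_fast@8        (leading @ marks fastcall)
int __stdcall add_cpp(int a, int b);                   // ?add_cpp@@YGHHH@Z  (C++ mangling also encodes
                                                       //  parameter types, hence the best case above)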
You didn't specify whether you're talking 32-bit or 64-bit here, and the difficulties outlined by you and the other posters mainly apply to 32-bit code. On 64-bit Windows, there's essentially only one calling convention (it's also covered in the Wikipedia article linked by John Knoeller), which means that you do know the calling convention (of course with the exception of anybody cooking up their own).
Also, with the Microsoft x64 calling convention, not knowing the number of parameters of the function to be called does not stop you from calling it, providing as many parameters as you wish/the user wishes to. This is because you as a caller set aside stack space and clean it up afterwards. -- Of course, not providing the right [number of] parameters may still have the called function do silly things because you're providing invalid input, but that's another story.
The compiled code does not just say 'Here this function is a fastcall, and this one here is stdcall' unfortunately.
Not even modern disassemblers like IDA try to deduce calling conventions by default (there might be a plugin or an option somewhere, I don't know).
Basically, if you are a human you can look at the first few instructions and the return and tell 90% of the time: if the arguments arrive on the stack and the function ends with a plain ret, it's cdecl (the caller cleans up); if it ends with ret n, it's stdcall (the callee cleans up); if the first arguments are passed through registers (especially ecx and edx), it's fastcall, or thiscall when ecx carries the this pointer. But all this info is useless, because your program obviously will not be a human.
If you are doing testing, don't you at least have the header files? This is an awfully hard way to skin a cat.
If you want to know what calling convention a C++ function uses, your best hope is to study
The header that declares that function, and
The documentation for the compiler that compiled your particular DLL.
But this whole thing sounds like a bit of a mess, honestly. Why does your friend want to be able to do this, and why can't he get the information he needs by parsing a header that declares the relevant functions?
This page describes the way VC++6 encodes parameter and calling convention info into a symbol name: http://www.bottledlight.com/docs/mangle.html
I suspect that later versions of VC++ will be compatible but I haven't confirmed this.
There are also some tools that automate this which accompany the compiler: http://msdn.microsoft.com/en-us/library/5x49w699.aspx
The name mangling only applies to C++ functions; if a function is declared extern "C", then this won't work.