Misunderstanding concerning g++, dynamic and static linking on linux vs windows - c++

I'm a little confused by what I've learned today. I hope somebody can help me out.
I understand the concept of dynamic and static linking, but the problem is as follows. On windows, or at least the paradigm on windows, you can have a .lib (which is like .a) and .dll (which is like .so, except... different) and you must statically link in the .lib which contains code that calls functions from the dll at runtime. Is this correct? In other words, gcc or g++ must have .lib files available at compile/link time, and be able to find .dll files at runtime. Please correct any wrong assumptions here.
However, I'm splitting a few of the source files in my small application out because I want to make them a library. When I run g++ on my object files with the -shared option, this basically creates a shared library (.so)? This is where the confusion arises. The same .so file is needed both at link time and at runtime? I have trouble understanding why I have to name it with the -L/-l options at link time and yet the same file is still needed at runtime. Is this actually the norm? Is a dll fundamentally different?
One final question. Take a library like Boost on Windows. I built Boost according to the instructions. In the end, the stage/lib directory contains libraries in a repeating sequence of name.a, name.dll.a, name.dll. What is the purpose of this scheme? I know I need the dll files at runtime, but when I use the -L/-l options at link time, which files is it using THEN?
Sorry if this is really scattered, but I hope someone can help clear this up. Thanks a lot!

On windows, or at least the paradigm on windows, you can have a .lib (which is like .a) and .dll (which is like .so, except... different) and you must statically link in the .lib which contains code that calls functions from the dll at runtime. Is this correct?
Yes and no. That is one way that DLLs work on Windows, but it is not the only way.
You can load a DLL manually, using Win32 API calls. But if you do, you have to get function pointers manually to actually access the DLL. The purpose of the import library (that static library you're talking about) is to do this automatically.
The nice thing about doing it manually is that you can pick and choose which DLLs you want. This is how some applications provide plugin support: users write a DLL that exports a well-defined set of API functions, your program loads the DLLs from a directory, and it bundles the function pointers for each DLL into its own object, which represents the interface to that plugin.
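For illustration, this is roughly what the manual route looks like. A minimal sketch, assuming a hypothetical plugin.dll that exports a function named plugin_entry (both names are made up here):

// Explicit (manual) loading of a DLL with the Win32 API.
// "plugin.dll" and "plugin_entry" are hypothetical names used only for illustration.
#include <windows.h>
#include <iostream>

typedef int (*PluginEntryFn)(int);  // the signature we expect the plugin to export

int main() {
    HMODULE mod = LoadLibraryA("plugin.dll");   // load the DLL at run time
    if (!mod) {
        std::cerr << "could not load plugin.dll\n";
        return 1;
    }
    // Look the symbol up by name and cast it to the expected function type.
    PluginEntryFn entry =
        reinterpret_cast<PluginEntryFn>(GetProcAddress(mod, "plugin_entry"));
    if (entry) {
        std::cout << "plugin_entry(42) returned " << entry(42) << "\n";
    }
    FreeLibrary(mod);                           // unload when we're done
    return 0;
}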
GCC handles import libraries the same way on Windows. Building a DLL produces the DLL plus an import library. It's a ".a" file instead of a ".lib" because of the toolchain, but it still does the same thing.
On Linux, .so files are a combination of the .dll and the import library. So you link to the .so when compiling the program in question, and it does the same job as linking to the import library.

They're just two ways of giving the linker information about the shared library at compile time. Maybe a comparison will explain it better?
On Windows, it's: "You will use a shared library (.dll), and here (.a or .dll.a) is the manual on how to use it."
On Linux, it's: "You will use a shared library (.so), so look at the library itself (.so) beforehand so you'll know how to use it."


Dynamic linking - Linux Vs. Windows

Under Windows, when I compile C/C++ code in a DLL project in MSVC, I get two files:
MyDll.dll
MyDll.lib
where, as far as I understand, MyDll.lib contains some kind of pointer table indicating the locations of the functions in the dll. When using this dll, say in an exe file, MyDll.lib is embedded into the exe file during linking, so at runtime it "knows" where the functions are located in MyDll.dll and can use them.
But if I compile the same code under Linux I get only one file, MySo.so, without MySo.a (the equivalent of the lib file on Linux), so how does an executable file under Linux know where the functions are located in MySo.so if nothing is embedded into it during linking?
The MSVC linker can link together object files (.obj) and object libraries (.lib) to produce an .EXE or a .DLL.
To link with a DLL, the process in MSVC is to use a so-called import library (.LIB) that acts as a glue between the C function names and the DLL's export table (in a DLL a function can be exported by name or by ordinal - the latter was often used for undocumented APIs).
However, in most cases the DLL export table has all the function names and thus the import library (.LIB) contains largely redundant information ("import function ABC -> exported function ABC", etc).
It is even possible to generate a .LIB from an existing .DLL.
Linkers on other platforms don't have this "feature" and can link with dynamic libraries directly.
On Linux, the linker (not the dynamic linker) searches through the shared libraries specified at link time and creates references to them inside the executable. When the dynamic linker loads these executables it loads the shared libraries they require into memory and resolves the symbols, which allows the binaries to be run.
MySo.a, if created, would actually include the symbols to be linked directly into the binary instead of the "symbol lookup tables" used on Windows.
rustyx's answer explains the process on Windows more thoroughly than I can; it's been a long time since I've used Windows.
The difference you are seeing is more of an implementation detail - under the hood both Linux and Windows work similarly: your code calls a stub function which is statically linked into your executable, and this stub then loads the DLL/shlib if necessary (in the case of delayed loading; otherwise the library is loaded when the program starts) and (on the first call) resolves the symbol via GetProcAddress/dlsym.
The only difference is that on Linux these stub functions (which are called PLT stubs) are generated dynamically when you link your app with the dynamic library (the library contains enough information to generate them), whereas on Windows they are instead generated when the DLL itself is created, in a separate .lib file.
The two approaches are so similar that it's actually possible to mimic Windows import libraries on Linux (see Implib.so project).
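For completeness, this is what the fully explicit route looks like on Linux, i.e. resolving a symbol yourself via dlopen/dlsym instead of relying on PLT stubs. A minimal sketch; the library name libexample.so and the function name example are made up:

// Explicit run-time loading on Linux with dlopen/dlsym (the counterpart of
// LoadLibrary/GetProcAddress). "libexample.so" and "example" are hypothetical.
// Depending on your glibc version you may need to link this program with -ldl.
#include <dlfcn.h>
#include <iostream>

int main() {
    void* handle = dlopen("./libexample.so", RTLD_LAZY);  // load the shared object
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return 1;
    }
    using ExampleFn = int (*)(int);
    // Look up the symbol by name, exactly like GetProcAddress on Windows.
    ExampleFn example = reinterpret_cast<ExampleFn>(dlsym(handle, "example"));
    if (example) {
        std::cout << "example(42) returned " << example(42) << "\n";
    }
    dlclose(handle);
    return 0;
}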
On Linux, you pass MySo.so to the linker and it is able to extract only what is needed for the link phase, putting in a reference that MySo.so is needed at run time.
.dll and .so are shared libraries (linked at runtime), while .a and .lib are static libraries (linked at compile time). There is no difference between Windows and Linux in this respect.
The difference is in how they are handled. Note: the difference is only in the customs - in how they are used. It wouldn't be too hard to do Linux-style builds on Windows and vice versa, except that practically no one does this.
If we use a dll, or call a function even from our own binary, there is a simple and clear way to do it. For example, in C, we write something like this:
int example(int x) {
    /* ...do_something... */
    return x;
}

int ret = example(42);
However, at the asm level there can be many differences. For example, on x86 a call opcode is executed and the 42 is passed on the stack. Or in some register. Or anywhere. No one knows, when writing the dll, how it will be used later - or how other projects will want to use it, possibly built with a compiler (or in a language!) that doesn't even exist yet (or is unknown to the developers of the dll).
For example, by default both C and Pascal pass the arguments (and fetch the return values) on the stack - but they do it in a different order. Arguments may also be exchanged between your functions in registers, by some compiler-dependent optimization.
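In practice, Windows libraries usually pin these details down explicitly in their headers, so that every caller agrees on the calling convention and on unmangled symbol names. A small sketch of that pattern; the MYLIB_CALL macro name is made up:

// Fixing the calling convention and symbol names explicitly, so callers built
// with other compilers (or languages) agree on how to call into the library.
// MYLIB_CALL is a made-up macro name used only for illustration.
#ifdef _WIN32
#  define MYLIB_CALL __stdcall   /* a fixed, well-known calling convention */
#else
#  define MYLIB_CALL             /* default (cdecl-style) calling convention */
#endif

#ifdef __cplusplus
extern "C" {                     /* no C++ name mangling: plain C symbol names */
#endif

int MYLIB_CALL example(int x);   /* the function from the snippet above */

#ifdef __cplusplus
}
#endif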
As you observed correctly, the Windows custom is that when building a dll, we also create a minimal .a/.lib alongside it. This minimal static library is only a wrapper; the symbols (functions) of the dll are reached through it, and it takes care of the required asm-level calling conventions.
Its advantage is compatibility. Its disadvantage is that if you have only the .dll, you can have a hard time figuring out how its functions expect to be called. This turns using such a dll into a reverse-engineering task if the developer does not give you the .a. Thus it mainly serves closedness - for example, it makes it easier to charge extra for the SDK.
Another disadvantage is that even if you use a dynamic library, you still have to link this little wrapper statically.
On Linux, the binary interface of shared libs is standard and follows the C convention. Thus no .a is required and there is binary compatibility between shared libs; in exchange, we don't get the advantages of the Microsoft custom.

About Linux C++ static libraries

I have several questions about static libraries and need your help.
From some books I learned that, a static library (.a in Linux) contains a set of compiled objects, when it's linked into an executable, the link tool will only take out those objects that are actually referenced.
So if the .a contains 1.o, 2.o and 3.o and my application only uses functions in 1.o, then only 1.o will be built into the executable, is this correct?
Then let's go further: say we have two .a libs, the first contains 1.o, 2.o and 3.o and the second contains 3.o, 4.o and 5.o. If my application only uses functions in 1.o, 2.o, 3.o and 4.o, then only these 4 .o will be built into the executable, is that correct?
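For concreteness, this is the kind of setup I mean (a minimal sketch; all file and library names are made up):

// one.cpp  -> one.o   (archived into libdemo.a)
int one() { return 1; }

// two.cpp  -> two.o   (also archived into libdemo.a, but never referenced)
int two() { return 2; }

// main.cpp -> only calls one(), so the linker should copy one.o but not two.o
int one();                    // declaration of the one library function we use
int main() { return one(); }

// Assumed build commands:
//   g++ -c one.cpp two.cpp
//   ar rcs libdemo.a one.o two.o      // create the static library (archive)
//   g++ main.cpp -L. -ldemo -o app    // only one.o ends up inside app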
I raised this question because I'm building some .a files to use with MSVC. These .a libraries are built in MinGW and then should be compatible with MSVC. I could include these libs into the MSVC project and build my program successfully. But the generated executable is 5 MB (the total size of all .a should be about 8MB) even when my program is an empty one (only have an empty main function).
Does this mean, .a when used in MSVC or .lib (static libs for Win) will be built into executable as a whole, but not in the way it behaves under Linux?
I also have a question about the following:
If I can use -static to link against the static version of libtiff, why does it still need to link to other libs? Shouldn't a static lib already contain all the code it needs?
Thanks
A static library won't have all the code it needs. Imagine a library that uses printf() to output error messages. This library will still depend on the static version of libc; it won't include the code for printf itself.
In your case, since Tiff supports various internal representation formats, one of which is jpeg, the static libtiff wants you to link the static libjpeg.
There is no fundamental difference in this between windows and linux. When you do a static link against libtiff and libjpeg, only the libjpeg functions that are actually needed by libtiff get linked, but not, for example, the parts that handle the JFIF Jpeg wrapper.
EDIT - answer to comments
There's a lot of stuff going on before your main() gets called. It's not that much on linux/unix, where the OS delivers arguments the way main() wants them, but on Windows a different entry point in the runtime library gets control when the program starts (for GUI programs it eventually calls WinMain()). That startup code is hidden in the library. It gets the whole command line in one string and has to parse the arguments to pass them to main(), which means checking for spaces, which is probably implemented using isspace(), which pulls in ctype, which pulls in lots of locale-dependent stuff, and so on. So much of your 5 MB might be code that serves to make your program work on Windows just like it would on unix. Also, if you're compiling with debug options on, the debug symbols take a lot of space as well.
Keltar's comment about a static library calling a dynamic one is right as well - but that adds a complication you don't need very often. There are more or less two reasons for static linkage:
You want your program to be able to run even when something goes wrong with the dynamic libraries. If I accidentally rm /lib/libc.so.*, I will be glad to have statically linked versions of mount and cp to copy it back from somewhere else. Thus, installers and "emergency programs" are often linked statically.
You want to make sure your program uses a specific version of the libraries - the one that was current on your system when you compiled it, not whatever dynamically linked version happens to be installed on some system 5 years later.
Neither reason makes much sense if only some, but not all, of the libraries are static.
There's exceptions though: imagine you need a specific feature in libtiff. You browse the documentation and it says nothing about the feature. You check the source code and find that it implements specific_feature(), with a big "This is experimental and might go away in a future version" comment. If you decide you need that feature now, you might want to link libtiff statically to protect your program from failing when future versions of libtiff don't implement the function any more. Of course, you'll still want the dynamic versions of libjpeg and libc. I'll leave the decision if this is good practise to you.
Windows is a special case, as it always uses kernel32.dll and user32.dll; there are no static versions of those, even if the rest of your program is linked statically.
So while a libtiff.a could require a libjpeg.so, and a libtiff.lib could require a libjpeg.dll, there's generally not much reason to do so.

Difference between a shared object and a dll

I have a library which at compile time is building a shared object, called libEXAMPLE.so (in the so.le folder), and a dll by the name of EXAMPLE.so (in the dll folder). The two shared objects are quite similar in size and appear to be exactly the same thing. Scouring the internet revealed that there might be a difference in the way programs use the dll to do symbol resolution vs the way it is done with the shared object.
Can you guys please help me out in understanding this?
"DLL" is how windows like to name their dynamic library
"SO" is how linux like to name their dynamic library
Both have the same purpose: to be loaded dynamically.
Windows uses the PE binary format and Linux uses ELF.
PE:
http://en.wikipedia.org/wiki/Portable_Executable
ELF:
http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
I'm assuming a Linux OS.
In Linux, static libraries (.a, also called archives) are used for linking at compile time while shared objects (.so) are used both for linking at load time and at run time.
In your case, it seems that for some reason the library differentiates between the file used for linking at load time (libEXAMPLE.so) and the file used for linking at run time (EXAMPLE.so), even though those two files are exactly the same.

How does the Import Library work? Details?

I know this may seem quite basic to geeks. But I want to make it crystal clear.
When I want to use a Win32 DLL, usually I just call APIs like LoadLibrary() and GetProcAddress(). But recently, I am developing with DirectX9, and I need to add d3d9.lib, d3dx9.lib, etc. files.
I have heard enough that LIB is for static linking and DLL is for dynamic linking.
So my current understanding is that the LIB contains the implementation of the methods and is statically linked at link time as part of the final EXE file, while the DLL is dynamically loaded at runtime and is not part of the final EXE file.
But sometimes, there're some LIB files coming with the DLL files, so:
What are these LIB files for?
How do they achieve what they are meant for?
Are there any tools that can let me inspect the internals of these LIB files?
Update 1
After checking wikipedia, I remember that these LIB files are called import library.
But I am wondering how it works with my main application and the DLLs to be dynamically loaded.
Update 2
Just as RBerteig said, there is some stub code in the LIB files generated alongside the DLLs. So the calling sequence should be like this:
My main application --> stub in the LIB --> real target DLL
So what information should be contained in these LIBs? I could think of the following:
The LIB file should contain the full path of the corresponding DLL, so the DLL can be loaded by the runtime.
The relative address (or file offset?) of each DLL export method's entry point should be encoded in the stub, so that correct jumps/method calls can be made.
Am I right on this? Is there something more?
BTW: Is there any tool that can inspect an import library? If I can see it, there'll be no more doubts.
Linking to a DLL file can occur implicitly at compile link time, or explicitly at run time. Either way, the DLL ends up loaded into the process's memory space, and all of its exported entry points are available to the application.
If used explicitly at run time, you use LoadLibrary() and GetProcAddress() to manually load the DLL and get pointers to the functions you need to call.
If linked implicitly when the program is built, then stubs for each DLL export used by the program get linked in to the program from an import library, and those stubs get updated as the EXE and the DLL are loaded when the process launches. (Yes, I've simplified more than a little here...)
Those stubs need to come from somewhere, and in the Microsoft tool chain they come from a special form of .LIB file called an import library. The required .LIB is usually built at the same time as the DLL, and contains a stub for each function exported from the DLL.
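As a concrete illustration of the implicit route, here is a minimal sketch that links against a hypothetical MyDll.dll through its import library MyDll.lib (the function name example is made up, and the declaration would normally come from a header shipped with the DLL):

#include <iostream>

// Declaration of a function exported by the hypothetical MyDll.dll.
extern "C" __declspec(dllimport) int example(int x);

// Ask the MSVC linker to pull in the import library (equivalently, add
// MyDll.lib under Linker -> Input -> Additional Dependencies).
#pragma comment(lib, "MyDll.lib")

int main() {
    // Looks like an ordinary call; the stub from MyDll.lib and the loader
    // make it land in MyDll.dll when the process starts.
    std::cout << example(42) << "\n";
    return 0;
}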
Confusingly, a static version of the same library would also be shipped as a .LIB file. There is no trivial way to tell them apart, except that LIBs that are import libraries for DLLs will usually be smaller (often much smaller) than the matching static LIB would be.
If you use the GCC toolchain, incidentally, you don't actually need import libraries to match your DLLs. The version of the Gnu linker ported to Windows understands DLLs directly, and can synthesize most any required stubs on the fly.
Update
If you just can't resist knowing where all the nuts and bolts really are and what is really going on, there is always something at MSDN to help. Matt Pietrek's article An In-Depth Look into the Win32 Portable Executable File Format is a very complete overview of the format of the EXE file and how it gets loaded and run. It's even been updated to cover .NET and more since it originally appeared in MSDN Magazine ca. 2002.
Also, it can be helpful to know how to learn exactly what DLLs are used by a program. The tool for that is Dependency Walker, aka depends.exe. A version of it is included with Visual Studio, but the latest version is available from its author at http://www.dependencywalker.com/. It can identify all of the DLLs that were specified at link time (both early load and delay load) and it can also run the program and watch for any additional DLLs it loads at run time.
Update 2
I've reworded some of the earlier text to clarify it on re-reading, and to use the terms of art implicit and explicit linking for consistency with MSDN.
So, we have three ways that library functions might be made available to be used by a program. The obvious follow-up question is then: "How do I choose which way?"
Static linking is how the bulk of the program itself is linked. All of your object files are listed, and get collected together into the EXE file by the linker. Along the way, the linker takes care of minor chores like fixing up references to global symbols so that your modules can call each other's functions. Libraries can also be statically linked. The object files that make up the library are collected together by a librarian in a .LIB file which the linker searches for modules containing symbols that are needed.
One effect of static linking is that only those modules from the library that are used by the program are linked to it; other modules are ignored. For instance, the traditional C math library includes many trigonometry functions. But if you link against it and use cos(), you don't end up with a copy of the code for sin() or tan() unless you also called those functions. For large libraries with a rich set of features, this selective inclusion of modules is important. On many platforms such as embedded systems, the total size of code available for use in the library can be large compared to the space available to store an executable in the device. Without selective inclusion, it would be harder to manage the details of building programs for those platforms.
However, having a copy of the same library in every running program creates a burden on a system that normally runs lots of processes. With the right kind of virtual memory system, pages of memory that have identical content need only exist once in the system, but can be used by many processes. So there is a benefit to increasing the chances that the pages containing code are identical to pages in as many other running processes as possible. But if programs statically link to the runtime library, then each has a different mix of functions, each laid out in that process's memory map at different locations, and there aren't many sharable code pages unless it is a program that all by itself is run in more than one process. So the idea of a DLL gained another, major, advantage.
A DLL for a library contains all of its functions, ready for use by any client program. If many programs load that DLL, they can all share its code pages. Everybody wins. (Well, until you update a DLL with new version, but that isn't part of this story. Google DLL Hell for that side of the tale.)
So the first big choice to make when planning a new project is between dynamic and static linkage. With static linkage, you have fewer files to install, and you are immune from third parties updating a DLL you use. However, your program is larger, and it isn't quite as good a citizen of the Windows ecosystem. With dynamic linkage, you have more files to install, and you might have issues with a third party updating a DLL you use, but you are generally being friendlier to other processes on the system.
A big advantage of a DLL is that it can be loaded and used without recompiling or even relinking the main program. This can allow a third party library provider (think Microsoft and the C runtime, for example) to fix a bug in their library and distribute it. Once an end user installs the updated DLL, they immediately get the benefit of that bug fix in all programs that use that DLL. (Unless it breaks things. See DLL Hell.)
The other advantage comes from the distinction between implicit and explicit loading. If you go to the extra effort of explicit loading, then the DLL might not even have existed when the program was written and published. This allows for extension mechanisms that can discover and load plugins, for instance.
These .LIB import library files are used in the following project property, Linker->Input->Additional Dependencies, when building a bunch of dlls that need additional information at link time, which is supplied by the import library .LIB files. In the example below, to avoid linker errors I need to reference dlls A, B, C, and D through their lib files. (Note: for the linker to find these files you may need to include their deployment path in Linker->General->Additional Library Directories, or else you will get a build error about being unable to find one of the provided lib files.)
If your solution builds only dynamic libraries, you may be able to avoid this explicit dependency specification by relying instead on the reference flags exposed under the Common Properties->Framework and References dialog. These flags appear to do the linking automatically on your behalf using the *.lib files.
This, however, is (as the name says) a Common Property, which is not configuration- or platform-specific. We needed to support a mixed build scenario: our application had a build configuration that produced a static build, and a special configuration that built a constrained subset of assemblies deployed as dynamic libraries. I had set the Use Library Dependency Inputs and Link Library Dependencies flags to true in various cases to get things to build, and later tried to simplify things, but when I introduced my code to the static builds I introduced a ton of linker warnings and the static builds became incredibly slow. I wound up with a bunch of warnings of this sort...
warning LNK4006: "bool __cdecl XXX::YYY() already defined in CoreLibrary.lib(JSource.obj); second definition ignored D.lib(JSource.obj)
I ended up manually specifying the Additional Dependencies to satisfy the linker for the dynamic builds, while keeping the static builds happy by not using a common property that slowed them down. When I deploy the dynamic subset build I only deploy the dll files, as these lib files are only used at link time, not at runtime.
Here are some related MSDN topics to answer my question:
Linking an Executable to a DLL
Linking Implicitly
Determining Which Linking Method to Use
Building an Import Library and Export File
There are three kinds of libraries: static, shared and dynamically loaded libraries.
Static libraries are linked into the code at the linking phase, so they actually end up inside the executable - unlike a shared library, for which the executable contains only stubs (symbols) to look up in the shared library file, which is loaded at run time before the main function gets called.
The dynamically loaded ones are much like shared libraries, except that they are loaded when, and if, the need arises by the code you've written.
In my mind, there are two methods to link a dll to an exe:
Use the dll and its import library (.lib file) implicitly
Use functions like LoadLibrary() explicitly

Dynamic and Static Libraries in C++

In my quest to learn C++, I have come across dynamic and static libraries.
I generally get the gist of them: compiled code to include into other programs.
However, I would like to know a few things about them:
Is writing them any different than a normal C++ program, minus the main() function?
How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'?
Once I get it to its format, how do I include it in another program?
Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily?
What is the difference (technically and practically) between a dynamic and static library?
How would I use third party libraries in my code (I'm staring at .dylib and .a files for the MySql C++ Connector)
Everything I have found relating to libraries seems to be targeting those who already know how to use them. I, however, don't. (But would like to!)
Thanks!
(I should also note I'm using Mac OS X, and although would prefer to remain IDE-neutral or command-line oriented, I use QtCreator/Netbeans)
Is writing them any different than a normal C++ program, minus the main() function?
No.
How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'?
Pass the -dynamiclib flag when you're compiling. (The name of the result is still by default a.out. On Mac OS X you should name your dynamic libraries as lib***.dylib, and on Linux, lib***.so (shared objects))
Once I get it to its format, how do I include it in another program?
First, make a header file so that the other program can #include it and know what functions can be used from your dylib.
Second, link to your dylib. If your dylib is named libblah.dylib, you pass the -lblah flag to gcc.
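Putting both steps together, a tiny end-to-end sketch might look like this (all file and function names are made up):

// blah.h -- the public interface the other program will #include
int blah_add(int a, int b);

// blah.cpp -- the implementation compiled into libblah.dylib
int blah_add(int a, int b) { return a + b; }

// main.cpp -- a client program that links against the dylib
#include "blah.h"
#include <iostream>
int main() { std::cout << blah_add(2, 3) << "\n"; }

// Assumed build commands on Mac OS X:
//   g++ -dynamiclib blah.cpp -o libblah.dylib
//   g++ main.cpp -L. -lblah -o main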
Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily?
/usr/lib or /usr/local/lib.
What is the difference (technically and practically) between a dynamic and static library?
Basically, for a static lib, the library is embedded into the file that "links" to it.
How would I use third party libraries in my code (I'm staring at .dylib and .a files for the MySql C++ Connector)
See the 3rd answer.
Is writing them any different than a normal C++ program, minus the main() function?
Except for the obvious difference that a library provides services for other programs to use, usually (*) there isn't a difference.
* in gcc, classes/functions are exported by default - this isn't the case in VC++; there you have to explicitly export them using __declspec(dllexport).
How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'?
This depends on your compiler. In Visual Studio you specify this in your project configuration. In gcc, to create a static library you compile your code normally and then package the object files in an archive using ar. To create a shared library you compile first (with the -fpic flag to enable position-independent code generation, a requirement for shared libraries), then use the -shared flag on the object files. More info can be found in the man pages.
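A small sketch of both variants with gcc/g++ (the file and library names are made up):

// foo.cpp -- a single translation unit we will package both ways
int foo(int x) { return x + 1; }

// Assumed commands for a static library: compile normally, then archive with ar.
//   g++ -c foo.cpp -o foo.o
//   ar rcs libfoo.a foo.o
//
// Assumed commands for a shared library: compile as position-independent code,
// then link with -shared.
//   g++ -c -fpic foo.cpp -o foo.o
//   g++ -shared foo.o -o libfoo.so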
Once I get it to its format, how do I include it in another program?
Again this is a little compiler-dependent. In VS, if it's a shared library, the class/function you wish to use should be marked with __declspec(dllimport) when you include it (this is usually done with ifdefs), and you have to specify the .lib file of the shared library for linkage. For a static library you only have to specify the .lib file (no export/import needed, since the code will end up in your executable).
In gcc you only need to specify the library which you link against using -llibrary_name.
In both cases you will need to provide your client some header files with the functions/classes that are intended for public use.
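The "ifdefs" mentioned above usually boil down to a small header pattern like the following sketch (the macro names MYLIB_API and MYLIB_BUILDING are made up; the library's own build defines MYLIB_BUILDING, clients leave it unset):

// mylib_api.h -- shared between the library build and its clients
#pragma once

#if defined(_WIN32)
#  if defined(MYLIB_BUILDING)
#    define MYLIB_API __declspec(dllexport)  // building the DLL: export the symbol
#  else
#    define MYLIB_API __declspec(dllimport)  // using the DLL: import the symbol
#  endif
#else
#  define MYLIB_API                          // gcc exports by default
#endif

MYLIB_API int mylib_answer();                // a function intended for public use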
Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily?
If it's your own library then it's up to you. Usually you can give the linker additional folders to look in. We have a lib folder in our source tree where all .lib (or .a/.so) files end up, and we add that folder to the linker's list of additional folders to search.
If you're shipping a library on UNIX the common place is usually /usr/lib (or /usr/local/lib), this is also where gcc searches in by default.
What is the difference (technically and practically) between a dynamic and static library?
When you link a program to static libraries the code of the libraries ends up in your executable. Practically this makes your executable larger and makes it harder to update/fix a static library for obvious reasons (requires a new version of your executable).
Shared libraries are separate from your executable and are referenced by your program and (usually) loaded at runtime when needed.
It's also possible to load shared libraries without linking to them. It requires more work since you have to manually load the shared library and any symbol you wish to use. On Windows this is done using LoadLibrary/GetProcAddress and on POSIX systems using dlsym/dlopen.
How would I use third party libraries in my code?
This is usually accomplished by including the necessary header files and linking with the appropriate library.
A simple example of linking with a static library foo would look like this: g++ main.cpp -o main -L/folder/where/libfoo.a/is/at -lfoo.
Most open source projects have a readme that gives more detailed instructions, I'd suggest to take a look at it if there is one.
Is writing [libraries] any different than a normal C++ program, minus the main() function?
That depends on your definition of "different." From the language's point of view, you write a file or collection of files, don't put in a main() and you tell the compiler to generate a library instead of an executable.
However, designing libraries is much harder because you have no control over the code that calls you. Libraries need to be more robust against failure than normal code. You can't necessarily delete pointers somebody passes to your function. You can't tell what macros will mess with your code. You also can't accidentally pollute the global namespace (e.g., don't put "using namespace std;" at the top of your header files).
How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'?
That depends on the compiler. In Visual C++ this is a project config setting. In gcc (going from memory) it's something like gcc -shared -fPIC foo.c -o libfoo.so.
Once I get it to its format, how do I include it in another program?
That depends on your compiler and linker. You make sure the header files are available via a project setting or environment variable, and you make sure the binaries are available via a different project setting or compiler variable.
Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily?
That depends on the operating system. In UNIX you're going to put things in places like /usr/lib, /usr/local/lib. On Windows people used to put DLLs in places like C:\WINDOWS but that's no longer allowed. Instead you put it in your program directory.
What is the difference (technically and practically) between a dynamic and static library?
Static libraries are the easier, original model. At compile time the linker puts all the functions from the library into your executable. You can ship the executable without the library, because the library is baked in.
Dynamic libraries (also called shared libraries) involve the compiler putting enough information in the executable that at runtime the dynamic linker will be able to find the correct libraries and call the methods in there. The libraries are shared across the whole system among the programs that use them. Explicit dynamic loading (dlopen(), dlsym(), et al.) adds a few details to the picture.
How would I use third party libraries in my code (I'm staring at .dylib and .a files for the MySql C++ Connector)
That's going to depend on your platform, and unfortunately I can't tell you much about .dylib files. .a files are static libraries, and you simply need to add them to your final call to gcc (gcc main.c foo.a -o main if you know where foo.a is, or gcc main.c -lfoo -o main if the system knows where libfoo.a, libfoo.la, or libfoo.so are). Generally you make sure the compiler can find the library and leave the linker to do the rest.
The difference between a static and dynamic library is that for static libraries the linking is done at compile time, embedding the executable code into your binary, while for dynamic libraries linking is done dynamically at program start. The advantages are that the libraries can be distributed and updated separately, and the code (memory) can be shared among several programs.
To use a library you simply pass -lname to g++ for a libname.a or libname.so.
I'm writing this to be more pragmatic than technically correct. It's enough to give you the general idea of what you're after.
Is writing them any different than a normal C++ program, minus the main() function?
For a static library, there's really not much difference.
For a dynamic library, the most likely difference you'll need to be aware of is that you may need to export the symbols you want to be available outside your library. Basically everything you don't export is invisible to users of your library. Exactly how you export, and whether you even need to by default, depends on your compiler.
For a dynamic library you also need to have all symbols resolved, which means the library can't depend on a function or variable that comes from outside the library. If my library uses a function called foo(), I need to include foo() in my library by writing it myself or by linking to another library that supplies it. I can't use foo() and just assume the user of my library will supply it. The linker won't know how to call a foo() that doesn't yet exist.
How does the compiled program get to be a library? It's obviously not an executable, so how do I turn, say 'test.cpp' into 'test.dll'?
It's similar to how you turn test.cpp into test.exe - compile and link. You pass options to the compiler to tell it whether to create an executable, a static library, or a dynamic library.
Once I get it to its format, how do I include it in another program?
In your source code, you include header files necessary to use the library, much as you would include a header file for code that's not in a library. You'll also need to include the library on your link line, telling the linker where to find the library. For many systems, creating a dynamic library generates two files, the shared library and a link library. It's the link library that you include on the link line.
Is there a standard place to put them, so that whatever compilers/linkers need them can find them easily?
There is an environment variable that tells the linker where to look for libraries. The name of that variable is different from one system to another. You can also tell the linker about additional places to look.
What is the difference (technically and practically) between a dynamic and static library?
A static library gets copied into the thing it is linked to. An executable will include a copy of the static library and can be run on another machine without also copying the static library.
A dynamic library stays in a separate file. The executable loads that separate file when it runs. You have to distribute a copy of the dynamic library with your program or it won't run. You can also replace the dynamic library with a new version, and as long as the new library has the same interface it will still run with the old executable. It also may save space if several executables use the same dynamic library. In fact dynamic libraries are often called shared libraries.
How would I use third party libraries in my code
Same as you would use one you created yourself, as described above.