Finding different ways to share C++ functions with different C++ applications [closed]

I'm trying to find different ways to reuse my C++ functions in different applications. Say for example I have the following functions:
void A(); // this will do a complex math operation
void B(); // this will load a complex shape file
void C(); // this will print the results
I need to use the above three functions in three different C++ programs. The programs are completely independent, and I'm trying to see what the best way is to use the functions in all of my applications rather than writing the same code three times.
I am thinking about the following options:
Option A: Writing a static library
Option B: Writing a dynamic library
Option C: A Windows service
Option D: The same code, compiled everywhere
Are there any other options? Or what would be the best option?

If the functions are only going to be called "in-house" by yourself and/or your co-workers (i.e. they aren't going to be exposed to people who don't have access to your source code repository), then option (D) is sufficient. Just keep the .cpp and .h files in a single well-known sub-directory of your source code repository and have each application's project file reference them as necessary. This is simple to implement and gives you maximum flexibility, since each project can compile the shared .cpp files with whatever compiler flags best suit its own needs. With a library, you'd have to figure out a single set of compiler flags that works for all applications that want to link to the library, which isn't always convenient.
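As a minimal sketch of that layout (the directory and file names here are invented for illustration):

// shared/mathops.h -- declarations only; each application adds this
// directory to its include path and compiles the .cpp itself.
#ifndef SHARED_MATHOPS_H
#define SHARED_MATHOPS_H

double complexMath(double x);  // stands in for the question's function A

#endif

// shared/mathops.cpp
#include "mathops.h"
#include <cmath>

double complexMath(double x) {
    return std::sqrt(x) * 2.0;   // placeholder for the real math operation
}

// app1/main.cpp -- one of the three applications
#include "mathops.h"
#include <iostream>

int main() {
    std::cout << complexMath(9.0) << '\n';
    return 0;
}

Each application's project file simply lists shared/mathops.cpp among its sources; no library is built or versioned.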
If you're writing an API for public consumption, OTOH, things get a little more complex, since after you release the code to the public you will no longer be in full control of which versions are getting used and where. In that case you will have to make a decision based on who your users are and what you think they would be most comfortable with.
Option C can probably be tossed out since it's overkill for this sort of thing, and carries the penalty of tying your code to a particular OS with no compensatory advantage.

It's option D (compile everywhere) all the way -- with the only exceptions being stand-alone libraries that are shared with many, many other people (or closed-source).
This makes it a lot easier to manage releases, because there really aren't any: each copy of the library can be updated independently, whenever it is convenient.
This makes it easy for each project to debug into the library, with the particular version of the library that is in use.
This gives you the option of customizing the library for each project -- but use this capability judiciously to minimize merging complexity.
This choice is independent of whether or not you build the library into a separate binary package as part of your build process.
I would recommend using something like git submodules to manage the code -- except that the git submodule feature is kind of half-baked.

Related

How are all the C++ functions finally defined? [closed]

For example, we know that the printf() function displays text on the console. But how are functions like printf() defined? Is it possible to write code to display text without the use of any library files? Is assembly code used in defining these functions?
We can talk about C more easily, because it's a very basic language that sits only a little above asm.
The answer is: system calls.
You might wonder: why? There are things that a language cannot do by itself, and I/O is one of them. I/O streams are "owned" by the operating system; it handles them. The OS allows you to use them, but you must always go through it first.
System calls are very basic: there are no format strings or whatever, for example.
Also, you need to consider that system calls are OS-dependent: Windows' are different from Linux's.
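To make that concrete, here is a minimal sketch (Linux/POSIX-specific; on Windows the rough equivalent would be WriteFile) that prints text by invoking the write system call directly, with no stdio at all:

// Print to stdout through the POSIX write() system call wrapper;
// no <cstdio> or <iostream> formatting machinery is involved.
#include <unistd.h>   // write(), STDOUT_FILENO

int main() {
    const char msg[] = "hello, world\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);  // fd 1 = standard output
    return 0;
}

Note that there is no format string here: formatting is exactly the part that printf() builds in user space on top of this raw call.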
See, for example, the puts implementation in the glibc.
Every library ultimately uses the operating system calls provided by the kernel. So it is possible to write your own printf-like function without using the C library. If you want to know how these functions work internally, you can look into assembly language programming.
Is it possible to write code to display text without the use of any library files?
Yes of course it is. You might directly drive your display device, without any use of the standard functions.
Is assembly code used in defining these functions?
Not necessarily; it can be accomplished entirely in C or C++, without a single line of assembler code.
In the end, how these functions are defined depends on the actual toolchain you use to compile your programs and on the standard libraries that come with it. There are certain low-level functions you can 'override' for your concrete environment. A common binding is to map the standard output interface (as used by printf()) to one of the UART interfaces of an MCU. E.g., for the commonly used newlib(c) that comes with GCC toolchains, there is a reference describing what necessarily has to be ported, and what optionally can be, for any environment: 'What steps do I need to do to port newlib to a new platform?'
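As a hedged sketch of that kind of binding for newlib: providing your own _write stub redirects printf() output to a UART (uart_send_byte is a hypothetical board-specific driver function):

extern "C" void uart_send_byte(char byte);  // hypothetical UART driver

// Newlib funnels printf() output through the _write() syscall stub;
// defining it ourselves retargets standard output to the UART.
extern "C" int _write(int fd, char *buf, int len) {
    (void)fd;                    // treat every descriptor as the console
    for (int i = 0; i < len; ++i)
        uart_send_byte(buf[i]);
    return len;                  // report all bytes as written
}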
Is it possible to write code to display text without the use of any library files?
Absolutely; after all, the library authors have done it themselves, haven't they? You would need to interface with the kernel to write low-level libraries like that, although it is not strictly necessary in every single case.
Is assembly code used in defining these functions?
Yes, partially; SIMD and other clever tricks are used for performance-critical parts, etc.

How do you combine components of large C++ projects? [closed]

I have a question about how large C++ projects with many components are supposed to be managed (I guess that's the best term). For all intents and purposes I'm a beginning programmer: I understand the basics of compiling, header files, etc., but I've never really worked on anything bigger than homework assignments.

So, let's take something like a game engine that has various components like a memory manager, renderer, physics simulation, and so on. How would one work on these components separately, but in a way that makes it easy to integrate back into the whole? For example, would you make a separate Visual Studio project for each piece with its own main? If you have one big project for everything, how would you work on one component without another, unfinished component making it fail every compile?

I feel like I'm missing some major concept. For projects with multiple programmers that have to check out portions to work on... do they grab all the code so they can compile, or do they set up their own temporary project to work on their bit? Both options sound wrong. You have to have a main function to compile, right?
I would very much appreciate anyone educating me on this topic, as I feel this is something I should have learned and just somehow missed completely.
When you are working with larger programs, it is customary to have one source file with a main program, while the rest (there can be many source files) are called from main; a minimal sketch follows below. Then you need a build strategy. You can write a script that compiles each of your source files and then links them all together. Unfortunately this can lead to long build times, so professional programmers use make files, which rebuild only the files that have changed.
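A minimal sketch of that layout (the file and function names are invented):

// physics.h -- declaration visible to other translation units
#ifndef PHYSICS_H
#define PHYSICS_H
void stepSimulation();
#endif

// physics.cpp -- compiles separately into its own object file
#include "physics.h"
void stepSimulation() { /* ... */ }

// main.cpp -- the only file that defines main(); it calls into the rest
#include "physics.h"

int main() {
    stepSimulation();
    return 0;
}

Because each .cpp compiles to its own object file, editing physics.cpp only forces that one file to be recompiled before the final link.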
As a further refinement, you can organize groups of sources into libraries and build the libraries separately and then link them with your remaining compiled source files.
Try looking up gmake (for Linux) to see how to build larger projects. I guess you are using Microsoft VC++, in which case compiled files have .obj extensions and libraries .lib extensions. Microsoft has its own way of building libraries, which is slightly more complicated than using gmake.
When you look further you'll come across shared libraries (dynamic link libraries on windows - DLLs).
This isn't really a great question for Stack Overflow's format. C++ does support language facilities for managing large code bases, like namespaces, classes, and header files. But your question seems to suggest a lack of perspective as to what they are for, or a limited understanding of the technical framework and process for contributing code to a software project, which isn't a C++-specific issue.
When working on a living project, a primary concern is dealing with complexity. Or, in other words, reducing the number of things you have to think about at any one point in time. What that means is if another programmer is working on the user interface, ideally your code in the physics engine shouldn't have to change to reflect those changes. So interfaces, for forming abstractions and hiding information, are essential.
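For instance, here is a minimal sketch of such an interface (the names are invented): other components depend only on the abstract class, so the physics implementation can change freely behind it.

// physics_interface.h -- the only thing other components ever see
class PhysicsEngine {
public:
    virtual ~PhysicsEngine() = default;
    virtual void step(double dt) = 0;   // advance the simulation by dt seconds
};

// physics_impl.h -- swapping this out never forces callers to change
class SimplePhysics : public PhysicsEngine {
public:
    void step(double dt) override { /* integrate positions, etc. */ }
};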
Granted I'm pretty green as well, so I can't give any real solid advice. I only mention this point to give some perspective as to how vague your question is. If I understand your question correctly, you might enjoy a book like Code Complete 2 by McConnell.
Large projects are separated into pieces. Normally, you should have the ability to compile each piece separately. The best practice that I know of is to declare the interfaces among the various components, minimizing dependencies as close to zero as possible, and then to build 'test' programs, which are small and serve two purposes: they test a small piece of code, and they have a main().
The directory structure is usually:
yourlib/
    lib/
    ext-inc/
    test/
    other dirs/
    ...
the lib directory contains the output library object (.a, .so)
the ext-inc directory contains the headers that external code will use (sometimes called 'public' or just 'inc')
the test directory usually has a main.c (or .cpp) file and might have some more, as needed
When you check out (svn) / clone (git) / sync (p4) / etc., you take everything, but work only on your own area. Once done, you merge/submit your changes into the main branch.
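As a sketch of what the test directory's driver might look like (yourlib.h and yourlib_init() are invented names):

// test/main.cpp -- a tiny driver that supplies the main() the library
// deliberately lacks and exercises one small piece of it
#include "yourlib.h"   // from ext-inc/
#include <cassert>

int main() {
    assert(yourlib_init());   // hypothetical API under test
    return 0;
}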

C++ Library Organization [closed]

I'm building a very basic library, and it's the first project that I plan on releasing for others to use if they'd like. As such, I'd like to know some "best practices" as far as organization goes. The only thing that may be unique about my project is that in order for it to be used, users are required to extend certain abstract classes, which leads me to my first question:
A lot of libraries I've seen consist of a .a file and a single .h file. Is this best practice? Wouldn't it be better to expose all the public .h files so that users can choose which ones to include? If this is the preferred way of doing things, how exactly is it accomplished? What goes into that single .h file?
My second question involves dependencies. For example my current project relies on OpenGL, GLFW, and GLEW. Should I package those in some way with my project, or just make it the user's responsibility to ensure that they are installed?
Edit: Someone asked about my target OS. All of my dependencies are cross platform so I'm (perhaps naively) hoping to make my library cross platform as well.
Thanks for any and all help!
It really depends on the circumstances. If you have some fairly complex functionality spread across a number of closely related functions, then one header is the right solution. E.g., you write a set of functions that draw something to the screen: you need a few functions to configure/set up the environment, a few functions to define and place objects in the scene, a few functions to do the actual drawing/processing, and finally teardown. Here, using one header file is a good plan.
In the above case, it's also possible to have one "overall" header-file that includes several smaller ones. Particularly if you have fairly large classes, sticking them all in one file gets rather messy.
On the other hand, if you have one set of functions that deal with gases dissolved in liquids, another set of functions to calculate the strength/load capacity of a steel beam, and another set of functions to calculate the friction of a rubber tyre against a road surface, then they should probably have different headers - even if it's all feasible functionality to go into a "physics/mechanics library".
It is rarely a good idea to ship third-party libraries with your library. Yes, if you want to offer two downloads, one "all you need, just add water" and one "bare library", that's fine. But I don't want to spend three times longer than necessary downloading your library simply because it also contains three other libraries that your code uses, which are already on my machine. However, do document which libraries are needed and what you need to do to install them on your supported platforms (and what the supported platforms are). Also document which versions of the libraries you have tested - there's nothing worse than "getting the latest", only to find that the version something needs is two steps back...
(And as Jason C points out, licensing gets very messy once you have a few different packages that your code depends on, because your license then has to be compatible with ALL the other licenses - sometimes that's not even possible...)
You have options and it really depends on how convenient you choose to make it for developers using your libraries.
As for the headers, the general method for libraries of average complexity is to have a single header that a developer can include to get everything they need. A good method is, if you have multiple headers, create a single header with the same name as your library (not required, just common) and have it #include all the individual headers. Then distribute the single header and individual headers. That way your users have the option of #including just one to get everything, or #including individual ones if necessary.
E.g. in mylibrary.h:
#ifndef MYLIBRARY_H
#define MYLIBRARY_H
#include <mylibrary/something.h>
#include <mylibrary/another.h>
#include <mylibrary/lastone.h>
#endif
Ensure that your individual headers can be included standalone (i.e. they #include everything they need) if you want to provide that option to developers.
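For example, a sketch of what "standalone" means for mylibrary/something.h (the contents are invented): the header pulls in everything it depends on, so #include <mylibrary/something.h> compiles on its own, in any order:

#ifndef MYLIBRARY_SOMETHING_H
#define MYLIBRARY_SOMETHING_H

#include <string>               // standard dependency of the API below
#include <mylibrary/another.h>  // in-library dependency, included here
                                // rather than relying on include order

std::string somethingUseful(const Another &input);  // hypothetical API

#endif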
As for dependencies, you will want to make it the user's responsibility to ensure they are installed. The user is compiling their code and linking to your library, and so it is also the user's responsibility to link to dependent libraries. If you package third-party dependencies with your library you run many risks:
Breaking the systems of users who already have the dependencies installed.
As mentioned in Mats Petersson's answer, forcing users to download dependencies they already have.
Violating licensing rights on third-party libraries.
The best thing for you to do is clearly document the required dependencies.
For this there are not really standard "best practices". Any sane practice would be a good practice.

Statically link a private library into a public one to hide symbols [closed]

Consider the following:
I am developing a static library X in C++ that, internally, uses the famous static library Y v2.0;
I want to distribute only one library X', that is X with Y statically linked/merged for its internal use;
A developer wants to use X' in his executable;
Also, he needs Y v1.0 (not v2.0, as I do);
Y v1.0 and v2.0 have some symbols in common, and some of these common symbols also behave differently.
I developed X with the strict requirement to use Y v2.0 for some of its internal business. This is to say that I cannot by any means revert to Y v1.0.
On the other side, the developer has similar restrictions to use Y v1.0.
As you can already argue, the question is: how can I link Y inside X without exporting Y symbols to avoid collisions?
Y is well established, and ideally I'd rather not modify its source code or build settings (if publicly available).
To bring things down to Earth: I am in the process of designing an SDK that will certainly need some 3rd-party libraries, let's say zlib.
In my development I'll rely on zlib v1.2.3.4.5.rc6 because I extensively and successfully used and tested it, and I cannot afford the SDK testing/fixing required if I change version.
All the statically or dynamically linked libraries the SDK offers must hide the 3rd-party static ones.
The potential customer could be under similar restrictions (he needs zlib v7.8.9), so how can I avoid symbol collisions? Again, preferably without changing the original source code (namespacing etc.).
To complicate things, the SDK is multiplatform, implying I'd need different ways to solve the problem depending on the platform (Windows, Linux, Mac OS, iOS, Android, ...) and compiler used (e.g., MSVC++ and g++).
Thank you.
Update
It seems I am VENDOR2 of this question:
Linking with multiple versions of a library
bstpierre's answer seems a viable solution, but I'm not sure it works or if it can be reproduced on OSes other than *nix.
I've had this problem many times with static libs, most recently with MSVCRT. With a single executable, the One Definition Rule gets in the way, as one commenter points out. There's really no way around this that I can think of, short of patching binaries. And you'd have to do this "deeply", catching all internal references that static library Y (zlib) makes to its own external-linkage objects.
In this case, I'd suggest using a dynamic library (DLL or SO). It will add a bit of deployment complexity. But it provides an executable "firewall", permitting global objects with the same name to reside in each binary without colliding. Even so, it can pose problems if both app and DLL have conflicting third-party dependencies. Still, probably the best option.
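On the GCC/Clang side, one common way to build that "firewall" is to give the shared library hidden visibility by default and export only X's own API. This is a sketch, with X_EXPORT and x_do_work as invented names (on MSVC the analogous mechanism is __declspec(dllexport) or a .def file):

// x_api.h -- the only symbols exported from libX.so
// Build roughly as: g++ -shared -fvisibility=hidden -Wl,--exclude-libs,ALL ...
// -fvisibility=hidden hides X's own non-annotated symbols, and the GNU ld
// option --exclude-libs,ALL keeps symbols from linked static archives
// (i.e. Y v2.0) from being re-exported, so they cannot collide with the
// application's own Y v1.0.
#define X_EXPORT __attribute__((visibility("default")))

X_EXPORT void x_do_work();   // hypothetical public entry point of X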

Tool to parse C++ source and move in-header inline methods to the .cpp source file? [closed]

The source code of our application is hundreds of thousands of lines and thousands of files, and in places very old - the app was first written in 1995 or 1996. Over the past few years my team has greatly improved the quality of the source, but one issue remains that particularly bugs me: a lot of classes have a lot of methods fully defined in their header file.
I have no problem with methods declared inline in a header in some cases - a struct's constructor, a simple method where inlining measurably makes it faster (we have some math functions like this), etc. But the liberal use of inlined methods for no apparent reason is:
Messy
Makes it hard to find the implementation of a method (especially searching through a tree of classes for a virtual function, only to find one class had its version declared in the header...)
Probably increases the compiled code size
Probably causes issues for our linker, which is notoriously flaky for large codebases. To be fair, it has got much better in the past few years, but it's not perfect.
That last reason may now be causing problems for us and it's a good reason to go through the codebase and move most definitions to the source file.
Our codebase is huge. Is there an automated tool that can do (most of) this for us?
Notes:
We use Embarcadero RAD Studio 2010. In other words, the dialect of C++ includes VCL and other extensions, etc.
A few headers are standalone, but most are paired with a corresponding .cpp file, as you normally would. Apart from the extension the filename is the same, i.e., if there are methods defined in X.h, they can be moved to X.cpp. This also means the tool doesn't have to handle parsing the whole project - it could probably just parse individual pairs of .cpp/.h files, ignore the includes, etc, so long as it could reliably recognise a method with a body defined in a class declaration and move it.
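To illustrate the transformation such a tool would need to perform (Widget is an invented example):

// Before -- X.h defines the method body inside the class:
class Widget {
public:
    int area() const { return w * h; }
private:
    int w, h;
};

// After -- X.h keeps only the declaration:
class Widget {
public:
    int area() const;
private:
    int w, h;
};

// ...and X.cpp receives the moved definition:
int Widget::area() const { return w * h; }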
You might try Lazy C++. I have not used it, but I believe it is a command line tool to do just what you want.
If the code is working, then I would vote against any major automated rewrite; lots of work could be involved in fixing it up afterwards. Small iterative improvements over time are a better technique, as you will be able to test each change in isolation (and add unit tests). Anyway, your major complaint about not being able to find the code is not a real problem, and it is already solved: there are tools that will index your code base so your editor will jump to the correct function definition without you having to search for it. Take a look at ctags or the equivalent for your editor.
Messy
Subjective
Makes it hard to find the implementation of a method (especially searching through a tree of classes for a virtual function, only to find one class had its version declared in the header...)
There are already tools available for finding the function. ctags will build an index that allows you to jump directly to the function from any decent editor (vim/emacs). I am sure your editor, if not one of these, has an equivalent tool.
Probably increases the compiled code size
Unlikely. The compiler will choose to inline or not based on internal metrics, not whether it is marked inline in the source.
Probably causes issues for our linker, which is notoriously flaky for large codebases. To be fair, it has got much better in the past few years, but it's not perfect.
Unlikely. If your linker is flaky, then it is flaky; it is not going to make much difference where the functions are defined, as this has no bearing on whether they are inlined anyway.
XE2 includes a new static analyzer. It might be worthwhile to give the new version of C++Builder's trial a spin.
You have a number of problems to solve:
How to regroup the source and header files ideally
How to automate the code modifications to carry this out
In both cases, you need a robust C++ parser with full name resolution to determine the dependencies accurately.
Then you need machinery that can reliably modify the C++ source code.
Our DMS Software Reengineering Toolkit with its C++ Front End could be used for this. DMS has been used for large-scale C++ code restructuring; see http://www.semdesigns.com/Company/Publications/ and track down the first paper, "Case Study: Re-engineering C++ Component Models Via Automatic Program Transformation". (There's an older version of this paper you can download from there, but the published one is better.) AFAIK, DMS is the only tool to have ever been applied to transforming C++ on a large scale.
This SO discussion on reorganizing code addresses the problem of grouping directly.