Installing GMP math library - C++

How do I use GMP in Codeblocks in Windows? I have looked all over the internet and have found no instructions on how to do so.
All the instructions I have found are for VC++ and are completely out of date.
The instructions in the readme are all for Linux, making them useless for me. Windows does not know how to run a .autoconfig file, and neither do I.
There is no folder to add to my search directories for the compiler, unless it is the entire gmp folder itself. There is no lib or include folder anywhere.
Also, there is no "gmp.h" file anywhere in the downloaded tar.gz or tar.bz, so I have no idea how to include it in my programs.
What do I do?

Check out MPIR. This is a fork of GMP with one of the stated aims being better Windows support.
In my experience (a), they're also a much more open group to deal with, both for users of their library and developers wishing to contribute.
In other words, even if they don't have explicit documentation for Code::Blocks, you'll get a lot more help on the subject from them.
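If you want a concrete starting point, here is a minimal sketch of the usual MSYS/MinGW route on Windows; the version placeholder, paths, and the Code::Blocks mapping are illustrative assumptions, not MPIR's official instructions:

    # unpack the source and build it with the MinGW toolchain (MSYS shell)
    tar xf mpir-x.y.z.tar.bz2
    cd mpir-x.y.z
    ./configure --enable-cxx --prefix=/usr/local
    make
    make check
    make install

    # compile a test program against the installed headers and library;
    # in Code::Blocks, the -I, -L and -l settings correspond to the
    # compiler search directories, linker search directories, and
    # link-libraries fields in the project's build options
    g++ -I/usr/local/include -L/usr/local/lib -o demo demo.cpp -lmpir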
(a) Keep in mind that this is from only a couple of messages on the forums when I tried to suggest some changes (stopping it from violently exiting on out-of-memory conditions, a fatal flaw in my opinion for a general-purpose library). While the GMP head honcho seemed to dismiss other opinions as raving lunacy, the MPIR folks went out of their way to track me down and talk through the possibilities.


Trouble installing C++ libraries from Github

I've used C++ for several courses in university but the libraries we used in these courses were quite simple to install. When we came across libraries which required a bit more complex installation, our teachers always provided detailed instructions.
This time I'm doing a project on my own. I have downloaded libraries for my own projects in the past. Sometimes the install was easy; sometimes I had to search for hours to do things most people here would do in 5 minutes.
I did some studying to refresh my memory of the compiling process, what dynamic and static libraries are, etc. I feel I understand most of it, but it didn't really help me with my efforts to install some libraries. Some people on GitHub provide instructions for newbies like me, but others, understandably, don't.
I'm saying a lot, so I'll get to the point. I will provide links for the library I'm trying to install, but in case I'm not allowed to, please let me know so I can rewrite this in a way that is allowed.
I'm trying to install libccd so I can then install fcl. In the instructions 3 ways are mentioned:
Using Makefile
Using autotools
Using Cmake
In all 3 methods, I see specific commands like this:
First of all, where am I supposed to write these commands?
I don't even know how to search for this. So, I tried installing with methods I've previously used.
Here are the steps I took:
Downloaded the whole repository.
Made a solution for Visual Studio with cmake-gui (that's all I know how to do with CMake; unfortunately it was usually enough, so I never learnt more).
Compiled the code in Release mode, which should give me a lib file.
Now, in my own project, I added the relevant include directories of the repository I downloaded and the library directory for the lib file which was produced.
I'm not sure about my last step. I'm pretty sure that if I did the installation as proposed in the link (and as required by fcl), I would have a new folder for the library containing just an include folder and a lib folder. Instead, I'm searching for the correct include files and the lib file in subfolders of subfolders, hoping I include the right ones. I'm a bit lost.
I'd really like some general steps (if there are any) for installing a library. I know each library has its own ways, but I assume the general idea with CMake or a Makefile should be the same. I have tried searching online, but I didn't come across a good or detailed enough tutorial. I really don't want to waste any more days trying to install libraries, and I don't want to end up asking here again.
UPDATE:
It looks like there is no standard way to install libraries. Since I'm getting 'close votes' I'll include a specific question:
Are the steps I took correct? What should I do next?
I'm trying to install libccd so I can then install fcl. In the instructions 3 ways are mentioned:
Using Makefile
Using autotools
Using Cmake
In all 3 methods, I see specific commands like this:
First of all, where am I supposed to write these commands?
These instructions are for a UNIX-like system. Makefiles, autotools... these are UNIX things. You'd typically write those build commands into a console window on something like Linux or a Mac.
CMake is a bit more cross-platform. You can find out how to invoke CMake on other SO questions.
You can get Makefiles and such to work on Windows, using projects like Cygwin and MSYS. I'd generally recommend you avoid that unless you really need it.
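For reference, here is roughly what such command sequences look like when typed into a terminal on a UNIX-like system; this is the generic template, not libccd's exact instructions:

    # 1) plain Makefile: build in the source directory
    make

    # 2) autotools: generate a Makefile, then build and install
    ./configure
    make
    sudo make install

    # 3) CMake: configure into a separate build directory, then build
    mkdir build && cd build
    cmake ..
    make
    sudo make install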
I'm not sure about my last step. I'm pretty sure that if I did the installation as proposed in the link (and as required by fcl), I would have a new folder for the library containing just an include folder and a lib folder. Instead, I'm searching for the correct include files and the lib file in subfolders of subfolders, hoping I include the right ones. I'm a bit lost.
This is where no "general" advice exists. Different authors put their output in different places. If their instructions didn't include this information, you're already doing the right thing. If you get really stuck, you can always just ask them.
I'd really like some general steps (if there are any) for installing a library.
No such thing exists, but where these industry-standard tools are involved, you can usually get by with the general documentation for those tools, or from memory.
I also suggest you shop around for a general book on programming, as general principles should be covered in any good one of those.

How should/could/must I handle the DLLs that my C++ projects depend on?

I'm lost here and I have no clue how to proceed. This is not a question about how to make my program work, this is a question about how to stop wasting my time.
My programming environment is Visual Studio 2013 on Windows, in C++.
I use 3 libraries extensively, namely: boost (using dynamic linking), OpenCV, and Qt.
During the development, I have configured VS to look at those 3 libraries by default for include and .lib. I have also added the 3 folders containing all the dlls to my PATH environment variable.
It works, but it is sometimes painful; let me explain when.
First hassle: Any time I get a LNK error telling me a function is missing, it is usually from OpenCV, since it has only one include file referencing all the functions. I have to look at OpenCV's source code to see which module the function belongs to, so I know what to link my program against.
Second hassle: When the time comes to deploy my application, I have to ship it with all the relevant DLLs. To know which ones I need, I open Dependency Walker and try to forget nothing; I then have to test it on a different computer, because 102% of the time I have missed a couple, and then I have to configure my installer generator to include all of those one by one.
Third hassle: To ease the process of configuring a new development machine a little, I recently switched to NuGet. It is great: I can add boost to any project with a couple of clicks. But now my boost DLLs are everywhere. I have one folder per boost library, and since there are dozens of those, I can't even add them all at once to my PATH, so I have to move them manually to the appropriate folder, and that is really not what I want to do with my not-so-precious-but-who-are-you-to-judge time.
I have looked around and couldn't find any good practices regarding this issue, maybe because they are too obvious, or too specific to a particular setup.
What do you do? What would you do if you were me?
We put all our external dependencies in version control along with the code. This ensures that all code can build "out of the box" on any of our development machines and also ensures that for any given version of the code, we know exactly which dependencies it has.
The best way to check for missing dependencies is to have a good automated test suite: if you've got comprehensive coverage, then if your tests pass, you must have deployed the required libraries.
In terms of linking to the appropriate libraries, unfortunately, that just sounds like an issue with the structure of OpenCV (I'm not familiar with OpenCV). I tend to use dumpbin under Windows and nm under Linux to easily grep for symbols when I get link errors with an unfamiliar library.
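As a sketch of that symbol hunt (the flags are real, but the library names and the symbol searched for are hypothetical):

    # Windows, from a Visual Studio command prompt: list a DLL's exports
    dumpbin /EXPORTS opencv_imgproc.dll | findstr cvtColor

    # Linux: search a shared library's dynamic symbol table
    nm -D /usr/lib/libopencv_imgproc.so | grep cvtColor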

C++ Libraries ecosystem using CMake and Ryppl

I am interested in building a cross-platform C++ Library and distributing it in source form. I want the consumers of this library to be able to acquire it, build it and consume it inside their software very easily on whatever platform they are working on and for whatever platform they are targeting. At the same time while building my library, I also want to be able to consume other popular OSS libraries through a similar mechanism.
I see that CMake and Ryppl were created with these intentions in mind and to some extent they do solve some of these problems, especially the build problem. But I don't quite know how exactly to go about achieving the above mentioned goals. Is it OK to settle on CMake as the build solution? How do I solve the library acquisition and distribution problem? Simply host the sources somewhere and let people discover, download and build them? Or is there a better way?
At the time of writing there is no accepted solution that handles everything you want. CMake gives you cross-platform builds, and git (with submodules) gives you a way to manage source-level dependencies if all the other projects are using CMake. But, in practice, many common dependencies your project will need don't use CMake, or even git.
Ryppl is meant to solve this, but progress is slow because the challenge is such a hard one.
Arguably, the most successful solution so far is to write header-only libraries. This is common practice and means your users just include your files, and their build system of choice takes care of everything. There are no binary dependencies to manage.
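That is also why header-only libraries are so painless to consume: assuming a hypothetical library unpacked into vendor/somelib, the user's entire "build integration" is one include path:

    # nothing to link: just point the compiler at the headers
    g++ -I vendor/somelib/include -o app main.cpp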
TheHouse's answer is still essentially true. Also, there don't seem to have been any updates to ryppl itself for a while (3 years), and the ryppl.org domain has expired.
There are some new projects aiming to solve the packaging issue.
Both build2 and wrap from mesonbuild have that goal in mind.
A proposal was made recently to add packages to the C++ standard, which may open up the debate (reddit discussion here).
Wrap looks promising as meson's author has learned from cmake.
There is a good video of its author discussing this here.
build2 seems more oblivious to prior art (and therefore condemned to reinvent it). However, both suffer from trying to solve the external-dependency problem at the same time as providing a complete build system.
conan.io is another recent attempt which doesn't try to provide the build system as well. Time will tell if any of these gain any traction.
The accepted standard for packaging C and C++ projects on Unix was always a source tarball + a configure script (autotools) + make.
cmake is now beginning to replace autotools as your first choice.
It is able to create RPMs and tarballs for distribution purposes.
It's also worth considering the package managers built into the various flavours of Linux. The easiest projects to build and install are those where most of the dependencies can be pulled in via yum or apt. This won't help you on Windows, of course. While there is a high barrier to entry in getting your own projects added to the main Linux repositories (e.g. RedHat, Debian), there is nothing to stop you from maintaining your own satellite repo.
The difference between that and just hosting your project on GitHub or similar is that you can provide pre-built binaries for a number of popular systems.
You might also consider that configure-time checks (e.g. from CMake's find_library()) and your own documentation will tell people what needs to be installed as a prerequisite, and provided you don't make it too onerous, that might be enough.
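For the package-manager route above, the user-facing prerequisite step is typically a one-liner; the package names here are hypothetical:

    # Debian/Ubuntu: install the development package for a dependency
    sudo apt-get install libfoo-dev

    # RedHat/Fedora equivalent
    sudo yum install foo-devel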

What are the dusty corners a newcomer to CMake will want to know?

I've done a lot of projects and used a lot of different build systems and CI tools. Most recently, I've been exposed to the occasionally challenging task of adding to an autotools-based environment for a reasonably sized C++ application. While I love the ease of use for the end user, I'm not so fond of dealing with m4 and all of the auto* tools from the developer side.
I'm working on a reasonably large side project in my spare time and have decided that I'd like to take CMake for a test drive. Since I'm just starting out, I'm obviously planning on digging through the documentation, FAQ, wikis, etc. and learning by doing. BTW, I'd fork over money for the "Mastering CMake" book, but the comments that I found on Amazon were enough to make me decide that it probably isn't worth the money. All that being said, in anything new there are often "gotchas" that a newcomer will often stumble over that the old pros have long since learned to avoid. I'm wondering what those are based on people's experience with CMake, and I'm hoping to cut my learning pains down a bit by asking here.
I should point out that I'm planning on building on Linux primarily, and other UN*X variants. Windows isn't really a concern from my POV. This is a large server-side application, with a Web and CLI interface for operators, a northbound REST interface for automation/integration with OSS tooling, and a SOAP southbound interface for CPEs. I'm going to need a lot of third-party libraries and applications to get this all to work unless I want to take the next 10 years to build this all by hand. :)
First of all, I think CMake is an excellent build-tool. It has in my opinion by far the best support for multi-platform-builds and has a powerful mechanism for finding 3rd-party libraries. Combined with CPack it even provides reasonable options for packaging and installation. Some hints and possible problems:
Always aim for out-of-source builds: An obvious one, but it can be difficult to get a project designed for in-source builds to build out-of-source. So, if you can, design for out-of-source builds from the start.
Syntax: The syntax can be bizarre, with a lot of strange quirks, although this has been getting better since the 2.6 and 2.8 releases.
Caching of variables: CMake keeps a cache of variables and settings, and sometimes this can be a problem when rebuilding something. Try removing CMakeCache.txt (or cleaning your out-of-source build directory) and rebuilding. This one is also mentioned by DarenW.
Finding and configuring 3rd-party libraries: If you have a large project depending on several libraries (your own or 3rd-party), this will likely be the most troublesome.
I seriously recommend diving into the find_package command. It is very powerful, but it depends entirely on the quality of the FindXXX.cmake files. Especially when building on multiple platforms (mostly Windows and Mac), these files may require some tweaking, or you may need to write your own (a command-line workaround is sketched after these hints).
Problems with linking libraries, like debugging "undefined references" or mismatches between debug|release or static|shared variants, can be difficult to track down. Especially with 3rd-party libraries, these problems may be caused by incorrect 3rd-party FindXXX.cmake files...
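When find_package fails to locate a dependency, the usual first resort is to hint at its location from the command line; the paths below are hypothetical:

    # point CMake at a non-standard install prefix
    cmake -DCMAKE_PREFIX_PATH=/opt/thirdparty ..

    # or set a package-specific variable honoured by its FindXXX.cmake,
    # e.g. FindBoost's BOOST_ROOT
    cmake -DBOOST_ROOT=/opt/boost_1_60_0 ..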
One thing that took me a while to figure out: if anything goes haywire with a build because I've changed a compiler option, added/removed a .c file, had confusion with support library versions, etc. - it's best to delete the build tree and run CMake from scratch. Otherwise, even though CMake and make seem to do their job, the resulting executable crashes.
It could be we're "not doing it right", but that's becoming a common problem with many tools, languages, frameworks, etc. It's not possible for everyone to be expert at all the software they depend on, and "not doing it right" is the only way a lot of things get done even among professionals. CMake is pretty smart about a lot of things, but not everything. Nuking the build tree remains a commonly used technique on our project.
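The "start from scratch" routine both answers describe amounts to this, assuming an out-of-source build directory named build:

    # throw away every cached variable and generated file
    rm -rf build

    # reconfigure and rebuild with a clean slate
    mkdir build && cd build
    cmake ..
    make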

What are the differences between Autotools, Cmake and Scons?

What are the differences between Autotools, Cmake and Scons?
In truth, Autotools' only real 'saving grace' is that it is what all the GNU projects are largely using.
Issues with Autotools:
Truly ARCANE m4 macro syntax combined with verbose, twisted shell scripting for tests for "compatibility", etc.
If you're not paying attention, you will mess up cross-compilation ability. (It should clearly be noted that Nokia came up with Scratchbox/Scratchbox2 to side-step highly broken Autotools build setups for Maemo/Meego.) If you, for any reason, have fixed, static paths in your tests, you're going to break cross-compile support because it won't honor your sysroot specification and it'll pull stuff from out of your host system. If you break cross-compile support, it renders your code unusable for things like OpenEmbedded and makes it "fun" for distributions trying to build their releases on a cross-compiler instead of on target.
Does a HUGE amount of testing for problems with ancient, broken compilers that NOBODY currently uses for anything production in this day and age. Unless you're building something like glibc, libstdc++, or GCC on a truly ancient version of Solaris, AIX, or the like, the tests are a waste of time and are a source of many, many potential breakages like the ones mentioned above.
It is pretty much a painful experience to get an Autotools setup to build usable code for a Windows system. (While I have little use for Windows, it is a serious concern if you're developing purportedly cross-platform code.)
When it breaks, you're going to spend HOURS chasing your tail trying to sort out the things that whoever wrote the scripting got wrong. (In fact, this is what I'm trying to do for work right now as I'm typing this, or rather, to rip out Autotools completely; I doubt there's enough time in the rest of this month to sort the mess out... Apache Thrift has one of those BROKEN build systems that won't cross-compile.)
The "normal" users are actually NOT going to just do "./configure; make"; for many things, they're going to be pulling in a package provided by someone, like out of a PPA, or from their distribution vendor. "Normal" users aren't devs and aren't grabbing tarballs in many cases. That's snobbery on everyone's part for presuming that is going to be the case. The typical users of tarballs are devs doing things, so they're going to get slammed with the brokenness if it's there.
It works... most of the time... is all you can say about Autotools. It's a system that solves several problems that only really concern the GNU project... for their base, core toolchain code. (Edit (05/24/2014): It should be noted that this type of concern is a potentially BAD thing to be worrying about; Heartbleed partially stemmed from this thinking, and with correct, modern systems, you really don't have any business dealing with much of what Autotools corrects for. GNU probably needs to do a cruft removal of the codebase, in light of what happened with Heartbleed.) You can use it for your project, and it might work nicely for a smallish project that you don't expect to work anywhere except Linux, or where the GNU toolchain is clearly working correctly. The statement that it "integrates nicely with Linux" is quite a bold statement and quite incorrect. It integrates with the GNU toolsuite reasonably well and solves the problems that IT has with its goals.
This is not to say that there's no problems with the other options discussed in the thread here.
SCons is more of a replacement for Make/GMake/etc., and looks pretty nice, all things considered. However...
It is still really more of a POSIX-only tool. You could probably get MinGW to build Windows stuff with it more easily than with Autotools, but it's still really geared toward doing POSIX stuff, and you'd need to install Python and SCons to use it.
It has issues doing cross-compilation unless you're using something like Scratchbox2.
Admittedly slower and less stable than CMake, by their own comparison. They come up with half-hearted negatives for CMake compared to SCons (e.g. the POSIX side needs make/gmake to build). (As an aside, if you need THAT much extensibility over other solutions, you should be asking yourself whether your project's too complicated...)
The examples given for CMake in this thread are a bit bogus.
However...
You will need to learn a new language.
There are counter-intuitive things if you're used to Make, SCons, or Autotools.
You'll need to install CMake on the system you're building for.
You'll need a solid C++ compiler if you don't have pre-built binaries for it.
In truth, your goals should dictate what you choose here.
Do you need to deal with a LOT of broken toolchains to produce a valid working binary? If yes, you may want to consider Autotools, being aware of the drawbacks I mentioned above. CMake can cope with a lot of this, but it worries about it less than Autotools does. SCons can be extended to worry about it, but it's not an out-of-the-box answer there.
Do you have a need to worry about Windows targets? If so, Autotools should be quite literally out of the running. If so, SCons may/may not be a good choice. If so, CMake's a solid choice.
Do you have a need to worry about cross-compilation (universal apps/libraries, things like Google Protobufs, Apache Thrift, etc., SHOULD care about this...)? If so, Autotools might work for you so long as you don't need to worry about Windows, but you're going to spend lots of time maintaining your configuration system as things change on you. SCons is almost a no-go at the moment unless you're using Scratchbox2; it really doesn't have a handle on cross-compilation, and you're going to need to use that extensibility and maintain it in much the same manner as you would with Automake. If so, you may want to consider CMake, since it supports cross-compilation without as many worries about leaking out of the sandbox, will work with or without something like Scratchbox2, and integrates nicely with things like OpenEmbedded.
There is a reason many, many projects are ditching qmake, Autotools, etc. and moving over to CMake. So far, I can cleanly expect a CMake-based project to either drop into a cross-compile situation or onto a Visual Studio setup, or to need only a small amount of clean-up because the project didn't account for Windows-only or OSX-only parts of the codebase. I can't really expect that of an SCons-based project, and I fully expect a third or more of Autotools projects to have gotten SOMETHING wrong that precludes them building right in any context except the host-building one or a Scratchbox2 one.
An important distinction must be made between who uses the tools. Cmake is a tool that must be used by the user when building the software. The autotools are used to generate a distribution tarball that can be used to build the software using only the standard tools available on any SuS compliant system. In other words, if you are installing software from a tarball that was built using the autotools, you are not using the autotools. On the other hand, if you are installing software that uses Cmake, then you are using Cmake and must have it installed to build the software.
The great majority of users do not need to have the autotools installed on their box. Historically, much confusion has been caused because many developers distribute malformed tarballs that force the user to run autoconf to regenerate the configure script, and this is a packaging error. More confusion has been caused by the fact that most major Linux distributions install multiple versions of the autotools, when they should not be installing any of them by default. Even more confusion is caused by developers attempting to use a version control system (e.g. cvs, git, svn) to distribute their software rather than building tarballs.
It's not about GNU coding standards.
The current benefit of autotools, specifically when used with automake, is that they integrate very well with building Linux distributions.
With cmake, for example, it's always "was it -DCMAKE_CFLAGS or -DCMAKE_C_FLAGS that I need?" No, it's neither; it's -DCMAKE_C_FLAGS_RELEASE. Or -DCMAKE_C_FLAGS_DEBUG. It's confusing; in autoconf, it's just ./configure CFLAGS="-O0 -ggdb3" and you have it.
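The contrast, written out (the flag values are only examples):

    # autoconf: compiler flags go straight onto the configure line
    ./configure CFLAGS="-O0 -ggdb3"

    # CMake: flags live in per-configuration cache variables
    cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_C_FLAGS_DEBUG="-O0 -ggdb3" ..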
In integration with build infrastructures, scons has the problem that you cannot use make %{?_smp_mflags}, _smp_mflags in this case being an RPM macro that roughly expands to the system's available parallelism (the admin may set it). People put things like -jNCPUS here through their environment. With scons that doesn't work, so packages using scons may only get built serially in distros.
What is important to know about the Autotools is that they are not a general build system - they implement the GNU coding standards and nothing else. If you want to make a package that follows all the GNU standards, then Autotools are an excellent tool for the job. If you don't, then you should use Scons or CMake. (For example, see this question.) This common misunderstanding is where most of the frustration with Autotools comes from.
While from a developer's point of view cmake is currently the easiest to use, from a user's perspective autotools have one big advantage:
autotools generate a single-file configure script, and all the files needed to generate it are shipped with the distribution. It is easy to understand and fix with the help of grep/sed/awk/vi. Compare this to CMake, where a lot of files are found in /usr/share/cmake*/Modules, which can't be fixed by the user unless he has admin access.
So, if something does not quite work, it can usually easily be "fixed" using standard Unix tools (grep/sed/awk/vi etc.) in a sledgehammer way, without having to understand the build system.
Have you ever dug through your cmake build directory to find out what is wrong? Compared to the simple shell script, which can be read from top to bottom, following the generated CMake files to find out what is going on is quite difficult. Also, with CMake, adapting the FindFoo.cmake files requires not only knowledge of the CMake language but might also require superuser privileges.
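As a sketch of that sledgehammer approach (the check name below is hypothetical, but autoconf cache variables follow the ac_cv_* pattern, so they are easy to grep for):

    # locate the failing test inside the plain-shell configure script
    grep -n "ac_cv_lib_foo" configure

    # then either edit the script directly with vi/sed, or simply
    # override the cached result of the misbehaving check:
    ./configure ac_cv_lib_foo=yes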