Raspberry Pi C++ Header Documentation

Is there some sort of documentation for what kind of headers you can include in c++ files when writing programs for Raspberry Pi or linux in general?
For instance I found this great guide on how to access the SPI bus from the Pi using c++ (http://hertaville.com/2013/07/24/interfacing-an-spi-adc-mcp3008-chip-to-the-raspberry-pi-using-c/)
I was able to take the code and apply it to my situation, and I successfully talked to an nRF24L01+ RF module; I am able to command the chip, etc.
But as I started trying to investigate what the code does (because I like to know how code that I get from the internet works), I got lost very quickly. For instance, how did the author of that code know to include the header files he did?
#include <unistd.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <string>
#include <iostream>
I know what ones such as "iostream" do, but I thought I would approach it by Googling the header file names (such as "unistd.h"), with no luck. I found lots of info, but none pertaining to the Pi, and the little bit I did find only referenced other header files and code. Is this too much to try to learn? Would I effectively be trying to learn the Linux kernel? Are there any good books for this type of stuff?
And back to my original question is there any kind of online (or offline for that matter) documentation for what header files you can include in your c++ projects on the Pi and what functionality they all add?
I found this (http://www.cplusplus.com/reference/) which has the standard files, but how do you know about all the non-standard header files and corresponding functionality?
All thoughts and help are appreciated, thanks!
Wesley
Edit 1
Here is the output of the "ls /usr/include" command: [image omitted]

TL;DR:
I've tried to give a general introduction to this topic below. If you are a more hands-on type and want to skip the wall of text, jump to the end; there are some tutorial links down there. Jump in -- getting stuck leads to the kind of questions Stack Overflow is best at.
Headers vs Libraries in C/C++
There is an important distinction to be made between header files and libraries in C++. Header files are the visible thing in your code, since they are what you actually mention in an #include statement. In most cases, though, the headers you are including correspond to libraries installed on the system.
As a practical matter this is important for two reasons:
You don't generally install "headers" on your system; you install libraries that happen to come with headers. There are a small number of header-only libraries that are exceptions to this rule, but usually you have a binary library somewhere that the header is facilitating access to.
The #include statement is only half of the story. There is usually a corresponding compiler option where you specify that you want to link against a particular library. In an IDE, this will be buried in the project options somewhere. For a command-line compiler, it will be a switch that you pass to the compiler on the command line or (more commonly) in a Makefile or similar.
That second point is actually true even of standard headers like iostream or stdio.h, but those are backed by the standard C or C++ library, which is linked in by default.
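As a concrete sketch of the header/library pairing (using zlib purely as an example; the Debian-style package name in the comment is an assumption, so check your distro):

// zlib_demo.cpp -- a header (zlib.h) plus the binary library behind it.
// On Debian-style distros the development package is typically "zlib1g-dev"
// (package name is an assumption; check your distro's package manager).
// Build -- the -lz switch tells the linker to pull in libz:
//   g++ zlib_demo.cpp -o zlib_demo -lz
#include <zlib.h>
#include <iostream>

int main() {
    std::cout << "linked against zlib " << zlibVersion() << "\n";
    return 0;
}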
Linux in general
Most Linux distributions come with a package manager of some kind. There are a number available (Ubuntu uses APT, Red Hat uses yum, Arch has pacman, Gentoo has Portage, etc.); the manager used is one of the defining properties of a distribution. Documentation will be easy to find on your distro's web page. This is a very important tool to understand.
With the exception of the various C/C++ and POSIX standard headers, the headers you have available for use are a function of the libraries installed on your system. This is important to understand because your universe of candidate headers consists of all the libraries available on the internet, not just the few that your system happens to have installed at the moment.
Each library will generally be wrapped up as a package for your linux distro. When you locate a library that you want, you install the corresponding package. This will give you the required header and library files.
It actually isn't often useful to go looking for the libraries and headers on your hard drive, but if you are curious, header files conventionally end up in one of the following directories (or a subdirectory inside them):
/usr/local/include
/usr/include
Libraries will mostly be found in
/lib
/usr/lib
/usr/local/lib
These will have cryptic names that include their version number, plus a more general (still cryptic) name that symlinks to the one with the specific version number (for example, libz.so and libz.so.1 both pointing at a fully versioned libz.so.1.2.11).
Some distributions have separate "development" versions of libraries that include the headers, and only install the runtime files by default (i.e. the files your users need to run your program). If your distro does this, you'll need the development package to write software with that library.
When you have decided on what functionality you require, you generally go looking for a library that will help you accomplish that task. You can ask around on forums, or just google for them.
Device drivers in the Kernel
Most libraries will interface with a device through a device driver. In linux, device drivers are compiled into the kernel, or present as modules that are loaded into the kernel. Your Pi distribution will hopefully have come with all the required drivers for the hardware present. If not, you'll need to obtain a kernel module or recompile your kernel to include the required driver. The modules and appropriate scripts to load/unload them are generally available as packages for your distro, just as libraries are.
It is possible to write software to talk to a driver directly. This is a VERY broad topic. Your best bet is to pick a device (e.g. I2C, SPI, etc.) and google for a tutorial on interfacing with that device on the Pi specifically; a small SPI sketch follows below.
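To give a taste of what talking to a driver directly looks like (and where several of the headers in your list come from), here is a minimal sketch using the kernel's spidev interface. The device node /dev/spidev0.0 and the three payload bytes are assumptions for illustration, so treat it as a starting point rather than a drop-in program:

// spi_demo.cpp -- minimal spidev transfer sketch.
#include <fcntl.h>             // open()
#include <unistd.h>            // close()
#include <sys/ioctl.h>         // ioctl()
#include <linux/spi/spidev.h>  // SPI_IOC_* and struct spi_ioc_transfer
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    int fd = open("/dev/spidev0.0", O_RDWR);  // device node is an assumption
    if (fd < 0) { perror("open"); return 1; }

    uint8_t mode = SPI_MODE_0;
    uint32_t speed = 500000;  // 500 kHz, a conservative example
    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    uint8_t tx[3] = {0x01, 0x80, 0x00};  // arbitrary example bytes
    uint8_t rx[3] = {0};
    struct spi_ioc_transfer tr;
    std::memset(&tr, 0, sizeof(tr));
    tr.tx_buf = reinterpret_cast<unsigned long>(tx);
    tr.rx_buf = reinterpret_cast<unsigned long>(rx);
    tr.len = 3;
    tr.speed_hz = speed;
    tr.bits_per_word = 8;
    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0) perror("SPI_IOC_MESSAGE");

    std::printf("received: %02x %02x %02x\n", rx[0], rx[1], rx[2]);
    close(fd);
    return 0;
}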
This tutorial addresses the basics of writing a loadable module. It would be a good place to start if you want to write your own SPI driver.
This is a good place to go for a general kernel overview: it will help you understand what is available, how to get a copy of the kernel source, etc. That is good knowledge to have if you want to write a driver, and it is also where to learn how to get your code submitted to the kernel if you develop something new.
Finally, writing your own device drivers is possible, and isn't something to be scared of. The details of that topic could fill a book, though, so it is something to google when you are ready to try it.
Linux on the Raspberry Pi
The first thing to understand about the Pi is that it is in most ways no different from a PC running Linux. Any general information you find about systems programming for Linux on a PC will apply equally to the Pi. The only caveats are that the processor architecture is different (ARM vs Intel/AMD), and the Pi has a few hardware items (like I2C, SPI and GPIO) that are not common, or at least not commonly interfaced with, on the PC.
There is actually more than one Linux distribution available for the Pi. These are usually derived from common PC distributions; Debian-derived distros (such as Raspbian) are the most common. You'll want to locate the website for whichever distro you have.
If you try to install things outside of your package manager, you'll need to be careful to get libraries compiled for the ARM processor (or source libraries that you compile yourself). There are a few exceptions, but the vast majority of open source libraries should be usable on ARM.
This looks like a promising library that might be a good starting point.
This looks like a good GPIO (General Purpose Input/Output -- i.e., pins you can toggle) tutorial.
This leads to some SPI sample code.

Related

What's the difference between installing Boost on the system and pasting it locally in the project?

I have a C++ project that I share with some colleagues. We develop on different operating systems, usually divided between macOS and various Linux distros.
We already use several libraries, which we paste into the lib folder so they are ready for everyone to use.
We now need to use Boost and, for some reason, it looks like the way it works differs from other libraries: it needs to be installed on the system, as this question asks.
It seems that installing Boost on the system is a de-facto standard, and many people take it for granted, even though I haven't seen any reference saying so, and I don't see any good reason for it, since it makes the source code less portable because of external dependencies and per-IDE configuration; an IDE-independent configuration would actually make more sense.
So what are the advantages of one way over the other?
What's the difference between installing Boost on the system and pasting it locally in the project?
If you install a library on the system, then its headers and archives are found in the default include / library directories, and the compiler / build system will find the library without special configuration. If you install a library elsewhere, then you need to configure the compiler / build system to find the headers and the archives where you've copied them.
The way to install a library can vary across systems.
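A small sketch of that difference (the local path and version below are made up for illustration; boost/version.hpp and BOOST_LIB_VERSION are real parts of Boost):

// boost_demo.cpp -- same source, two ways of finding the headers.
// System install (headers under /usr/include, found automatically):
//   g++ boost_demo.cpp -o boost_demo
// Local copy pasted into the project (path is hypothetical):
//   g++ -I./lib/boost_1_80_0 boost_demo.cpp -o boost_demo
// (Compiled Boost libraries would additionally need -L<dir> and -lboost_xxx.)
#include <boost/version.hpp>
#include <iostream>

int main() {
    std::cout << "Boost " << BOOST_LIB_VERSION << "\n";
    return 0;
}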
it looks like the way [Boost] works is different by other libraries and it needs to be installed on the system
No, Boost is not special in this regard. It can be installed in a system directory or elsewhere.
Regardless of where you install the library, I recommend using a package manager.
If you install using the system package manager, then you will be limited to the version provided by the system. This is an advantage when targeting that particular system, but a potential problem when developers use a variety of systems.
A system package is often better because the packager has adapted it to the specific environment of the given system. Moreover, using a system package means the application is more likely to work correctly on users' systems, because you can then package your application and declare the runtime library as a dependency.

Releasing a program

So I made a C++ console game. Now I'd like to "release" the game. I want to give out only the .exe file and not the code. How do I go about this? I'd like to make sure it will run on all Windows devices.
I used the following headers:
iostream
windows.h
MMSystem.h
conio.h
fstream
ctime
string
string.h
* I used namespace std
* I used Code::Blocks 13.12 with MinGW
And I used the following library:
libwinmm.a
Thank you in advance
There are many different ways of installing applications. You could go with an installer like Inno Setup or just with a regular ZIP file. Some programs can even be standalone by packaging all resources within the executable, but to my knowledge this is not an easy option for C++.
I suppose the most basic way is to create different builds for different architectures using static libraries, then find any other DLLs specific to each architecture and bundle everything together in one folder. Supporting x86/x86-64/ARM should be enough for most purposes. LLVM/Clang and GCC have extensive support for many architectures; if need be, you can download the source code of the libraries you use and compile them for each architecture you plan to support, with whatever compilation options each one needs. A sketch of the static-linking part follows.
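As one hedged illustration of the static approach with the questioner's toolchain (MinGW): the -static flags below are standard GCC options, -lwinmm matches the libwinmm.a mentioned above, and the source file is a trivial stand-in.

// game.cpp -- trivial stand-in for the console game.
// Building with the GCC runtimes linked statically means users don't
// need MinGW DLLs such as libstdc++-6.dll or libgcc_s_dw2-1.dll:
//   g++ -static -static-libgcc -static-libstdc++ game.cpp -o game.exe -lwinmm
#include <iostream>

int main() {
    std::cout << "hello\n";
    return 0;
}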
A virtual machine can also be helpful for this cross-compilation and compatibility testing.
tl;dr: Get all the libraries you need in either static or dynamic (DLL) form. Check that they are of the right architecture (x86 programs/code will not run on MIPS and vice versa). Get all your resources. Get a virtual machine, and then test your program on it. Keep testing until all the dependency problems go away.
Note: when I did this, I actually had some compatibility issues with, of all things, MinGW-w64. Just a note: you may need some DLLs from MinGW, or, if you're using Cygwin, you will of course need the Cygwin DLL. I don't know much about MSVC, but I would assume that even it needs some DLLs at some level if you decide to support an outdated Windows OS.

Cross Platform Library with one file for all

Is there a way to compile a cross-platform library that works on all platforms with a single file?
What I mainly (only?) see is that Windows uses DLLs and other platforms each use a different format.
Why are these not standardised? Is there a standard format that can be used instead? If not, can this be faked?
Sorry about the multiple questions, but answering one should make the others moot.
Libraries contain compiled code, which is actually specific to the architecture of the platform. Since there is no standard agreement between big players on machine architecture, the unfortunate result is that the libraries are not portable across platforms.
The best way is to open-source the code and let users compile it on the platform they want.
The second-best option is to go the Java way: distribute your library as a jar file containing the class files, and let the users install the right JRE for their platform.
I am not aware of any other options unfortunately.
I don't know why object files aren't standardised (although you can use GCC for cross-compilation, AFAIK), but as far as I know the only viable, guaranteed cross-platform solution right now is source. For example, CImg ships as a single header file (about 40 KB), but it has some dependencies: it needs a backend image-processing library/toolchain. I'm not quite sure, though; maybe there are cross-platform static object formats.

Platform C Preprocessor Definitions

I'm writing a small library in C++ that I need to be able to build on quite a few different platforms, including iPhone, Windows, Linux, Mac and Symbian S60. I've written most of the code so that it is platform-agnostic but there are some portions that must be written on a per-platform basis.
Currently I accomplish this by including a different header depending on the current platform, but I'm having trouble fleshing this out because I'm not sure what preprocessor definitions are defined on all platforms. For Windows I can generally rely on seeing WIN32 or _WIN32. For Linux I can rely on seeing _UNIX_, but I am less certain about the other platforms or their 64-bit variants. Does anyone have a list of the definitions found on different platforms, or will I have to resort to a config file or gcc parameter?
I have this sourceforge pre-compiler page in my bookmarks.
The definitions are going to be purely up to your compiler vendor. If you are using the same compiler (say, gcc) on all your platforms, then you will have an easier time of it.
You might also want to try to organize your project such that most of the .h files are not platform-dependent. Split your implementation (.cpp files) into separate files: one for the nonspecific stuff and one for each platform. The platform-specific ones can include 'private' headers that only make sense for that platform. You may have to write adapter functions to get something like this to work 100% (when the system libs take slightly different arguments), but I have found it to be really helpful in the end, and bringing up a new platform is a whole lot easier in the future. A sketch of this layout follows.
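A minimal sketch of that split (all file and function names here are invented for illustration):

// sleep_ms.h -- the platform-neutral interface
#ifndef SLEEP_MS_H
#define SLEEP_MS_H
void sleep_ms(unsigned ms);
#endif

// sleep_ms_win32.cpp -- only compiled into the Windows build
#include <windows.h>
#include "sleep_ms.h"
void sleep_ms(unsigned ms) { Sleep(ms); }

// sleep_ms_posix.cpp -- only compiled into the POSIX builds
#include <time.h>
#include "sleep_ms.h"
void sleep_ms(unsigned ms) {
    struct timespec ts = { static_cast<time_t>(ms / 1000),
                           static_cast<long>(ms % 1000) * 1000000L };
    nanosleep(&ts, nullptr);
}

The build system (Makefile, IDE project, etc.) picks which .cpp to compile, so the header needs no #ifdef at all.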
Neither the C nor the C++ standards define such symbols, so you are going to be at the mercy of specific C or C++ implementations. A list of commonly used symbols would be a useful thing to have, but unfortunately I haven't got one.
I don't think there exists a universal list of platform defines, judging by the fact that every cross-platform library I have seen has an ad-hoc config.h full of this stuff. But you can consider looking at the ones used by fairly portable libraries like libpng, zlib, etc.
Here's the one used by libpng
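For flavor, a typical ad-hoc ladder looks something like this (the _WIN32/__APPLE__/__linux__/__unix__ macros are commonly predefined by the major compilers, but verify for your toolchain; the platform_*.h headers are hypothetical):

#if defined(_WIN32)                  // defined for both 32- and 64-bit Windows
    #include "platform_win32.h"
#elif defined(__APPLE__)
    #include <TargetConditionals.h>  // distinguishes Mac OS X from iOS
    #include "platform_apple.h"
#elif defined(__linux__)
    #include "platform_linux.h"
#elif defined(__unix__)
    #include "platform_unix.h"
#else
    #error "Unsupported platform"
#endif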
If you want to look through the default preprocessor symbols for a given system on which you have GCC (e.g. Mac OS X, iOS, Linux), you can get a complete list from the command-line thus:
echo 'main(){}' | cpp -dM
These are often of limited use however, as at the stage of the compilation at which the preprocessor operates, most of the symbols identify the operating system and CPU type of only the system hosting the compiler, rather than the system being targeted (e.g. when cross-compiling for iOS). On Mac OS X and iOS, the right way to determine the compile-time characteristics of the system being targeted is
#include <TargetConditionals.h>
This will pick up TargetConditionals.h from the Platform and SDK currently in use, and then you can determine (e.g.) endianness and some other characteristics from some of the Macros. (Look through TargetConditionals.h to see what kinds of info you can glean.)
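A brief hedged example of using it (TARGET_OS_IPHONE and TARGET_OS_MAC are among the macros the header defines; look through it for the rest):

// target_demo.cpp -- builds only against Apple SDKs.
#include <TargetConditionals.h>
#include <cstdio>

int main() {
#if TARGET_OS_IPHONE
    std::puts("built for iOS (device or simulator)");
#elif TARGET_OS_MAC
    std::puts("built for Mac OS X");
#endif
    return 0;
}

Checking TARGET_OS_IPHONE before TARGET_OS_MAC matters, since the iOS SDKs define TARGET_OS_MAC as well.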

Portable shared objects?

Is it possible to use shared object files in a portable way, like DLLs in Windows?
I'm wondering if there is a way I could provide a compiled library, ready to use, for Linux, the same way you can compile a DLL on Windows and use it on any other Windows machine (OK, not ANY other, but on most of them).
Is that possible in Linux?
EDIT:
I've just woke up and read the answers. There are some very good ones.
I'm not trying to hide the source code. I just want to provide an already-compiled-and-ready-to-use library, so users with no experience with compilation don't need to do it themselves.
Hence the idea is to provide a .so file that works on as many different Linuxes as possible.
The library is written in C++, using STL and Boost libraries.
I highly, highly recommend using the LSB app / library checker. It's going to tell you quickly if you:
Are using extensions that aren't available on some distros
Introduce bash-isms in your install scripts
Use syscalls that aren't available in all recent kernels
Depend on non-standard libraries (it will tell you what distros lack them)
And lots upon lots of other very good checks
You can get more information here, as well as download the tool. It's easy to run: just untar it, run a Perl script and point your browser at localhost; the rest is browser-driven.
Using the tool, you can easily get your library / app LSB certified (for both versions) and make the distro packager's job much easier.
Beyond that, just use something like libtool (or similar) to make sure your library is installed correctly, provide a static object for people who don't want to link against the DSO (it will take time for your library to appear in most distributions, so someone writing a portable program can't count on it being present), and comment your public interface well.
For libraries, I find that Doxygen works the best. Documentation is very important; it surely influences my choice of library for any given task.
Really, again, check out the app checker: it's going to give you portability problem reports that would otherwise take a year of having the library out in the wild to obtain.
Finally, try to make your library easy to drop 'in tree', so I don't have to statically link against it. As I said, it could take a couple of years before it becomes common in most distributions, so it's much easier for me to just grab your code, drop it in src/lib and use it, until (and if) your library becomes common. And please, please give me unit tests; TAP (Test Anything Protocol) is a good and portable way to do that. If I hack your library, I need to know (quickly) if I broke it, especially when modifying it in tree or in situ (if the DSO exists).
If you'd like to help your users by giving them compiled code, the best way I know of is to give them a statically linked binary plus documentation on how to run it. (This is possibly in addition to giving them the source code.) Most statically linked binaries work on most Linux distributions of the same architecture (and 32-bit (x86) statically linked binaries also work on 64-bit (amd64) systems). It is no wonder Skype provides a statically linked Linux download. A sketch of such a build follows.
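A minimal sketch of a fully static build (one hedged recipe; glibc emits warnings for some functions, such as the NSS-backed name lookups, when linked statically, so test the result):

// app.cpp -- trivial stand-in for the real program.
// Build fully statically, then verify:
//   g++ -static app.cpp -o app
//   file app        # should report "statically linked"
#include <iostream>

int main() {
    std::cout << "no shared-library dependencies needed at runtime\n";
    return 0;
}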
Back to your library question. Even if you are an expert in writing shared libraries on Linux, and you take your time to minimize the dependencies so your shared library would work on different Linux distributions, including old and new versions, there is no way to ensure that it will work in the future (say, 2 years). You'll most probably end up maintaining the .so file, i.e. making small modifications over and over again so the .so file stays compatible with newer versions of Linux distributions. This is no fun to do for a long time, and it decreases your productivity substantially: the time you spend maintaining library compatibility would be much better spent on, e.g., improving the functionality, efficiency or security of the software.
Please also note that it is very easy to upset your users by providing a library in .so form which doesn't work on their system. (And you don't have the superpower to make it work on all Linux systems, so this situation is inevitable.) Do you provide 32-bit and 64-bit builds, covering x86, PowerPC, ARM, etc.? If the .so file works only on Debian, Ubuntu and Red Hat (because you don't have the time to port the file to more distributions), you'll most probably upset your SUSE and Gentoo users (and more).
Ideally, you'll want to use GNU autoconf, automake, and libtool to create configure and make scripts, then distribute the library as source with the generated configure and Makefile.in files.
Here's an online book about them.
./configure; make; make install is fairly standard in Linux.
The root of the problem is that Linux runs on many different processors. You can't just rely on the processor supporting x86 instructions like Windows does (for most versions: Itanium (XP and newer) and Alpha (NT 4.0) are the exceptions).
So, the question is: how do you develop shared libraries for Linux? You could take a look at this tutorial or the Program Library HOWTO.
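To give a flavor of what those cover, here is the conventional shape of a versioned shared-library build (the library name is invented; the soname/symlink scheme is the standard Linux convention):

// mylib.cpp -- a minimal shared-object sketch.
// Build and set up the conventional symlinks:
//   g++ -fPIC -shared -Wl,-soname,libmylib.so.1 mylib.cpp -o libmylib.so.1.0.0
//   ln -s libmylib.so.1.0.0 libmylib.so.1   # runtime name (matches the soname)
//   ln -s libmylib.so.1 libmylib.so         # development name, used by -lmylib
extern "C" int mylib_add(int a, int b) {  // extern "C" avoids C++ name mangling
    return a + b;
}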
I know what you are asking. For Windows, Microsoft has carefully kept the DLLs compatible, so your DLLs usually work on almost every version of Windows; that's why you call it "portable".
Unfortunately, on Linux there are too many variations (and everyone is trying to be "different" to make money), so you cannot get the same benefits as on Windows; that's why we have the same packages compiled over and over for different distributions, distro versions, CPU types, ...
Some say the problem is caused by (CPU) architecture, but it is not. Even on the same architecture, there are still differences between distributions. Once you've really tried to release a binary package, you'll know how hard it is: even the C runtime library dependency is hard to maintain. The base Linux system standardises so little that almost every service runs into dependency issues.
Usually you can only build a binary that is compatible with some particular distribution (or several distributions, if you are lucky). That's why releasing Linux programs in binary form is always a mess, unless you bind to some distro like Ubuntu, Debian, or Red Hat.
Just putting a .so file into /usr/lib may work, but you are likely to mess up the scheme that your distro has for managing libraries.
Take a look at the Linux Standard Base -- that is the closest thing you will find to a common platform amongst Linux distros.
http://www.linuxfoundation.org/collaborate/workgroups/lsb
What are you trying to accomplish?
Tinkertim's answer is spot on. I'll add that it's important to understand and plan for changes to gcc's ABI. Things have been fairly stable recently, and I guess all the major distros are on gcc 4.3.2 or so. However, every few years some change to the ABI (especially the C++-related bits) seems to cause mayhem, at least for those wanting to release cross-distro binaries and for users who have gotten used to picking up packages from distros other than the one they actually run, and finding that they work. While one of these transitions is going on (all the distros upgrade at their own pace), you ideally want to release libs with ABIs supporting the full range of gcc versions in use by your users.