Any library providing commonly used structures and algorithms for FUSE - C++

I am going to write a file system prototype using FUSE. Are there any (additional) well-implemented libraries besides FUSE that provide common file system optimization functions such as a directory cache, journaling, lookup tables, atomic operations, etc.? Preferably, it should be written in C.
By the way, I am going to implement it on OS X and Linux. That is one of the major reasons I am using FUSE rather than a native file system, even though performance matters.
Thanks.

On Windows, the Dokan library has been around for a while. I've been meaning to play with it, but haven't had the chance. It is a file system driver that forwards all activity back to a user-mode process, plus a DLL that facilitates writing the user-mode side. It is open source, licensed under a mix of the LGPL and MIT licenses.
There was an emulation (or perhaps a port) of FUSE for Windows once called WinFUSE, but my links to it are all dead now. It may be findable...
Edit:
There is an extensive List of Filesystems at Wikipedia. It doesn't appear to list many options on Linux outside of FUSE. The others that appear at first glance to be similar are often implemented on top of FUSE.
The exception seems to be LUFS (Linux Userland File System), but work on it appears to have been abandoned in 2003 in favor of FUSE.

Related

How can you send data to another process (at the C++ language level only)

Suppose you are developing two applications (A and B).
How can you send some piece of information to B from A if you are only allowed to work at the C++ language level (that is, including the standard libraries and STL)?
Right now I'm thinking std::ofstream and std::ifstream could be a possible solution (albeit a crude one), but what pitfalls are there, and can they be avoided? (How?)
You just cannot. Standard C++17 does not know about any kind of inter-process communication and does not know much about processes (except through std::system, whose behavior is not really specified). Some operating systems don't have any processes, some of them don't have files, and some of them don't have pipes.
Read more about operating systems. I strongly recommend Operating Systems: Three Easy Pieces (which is freely available).
Of course, you can read and write a file, but the synchronization between the two processes still has to happen somehow, perhaps by running one after the other in some operating-system-specific way (so running A, then B; how exactly that happens is OS specific).
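As a minimal sketch of that crude file-based approach (the file name and the send/receive roles are invented for the example, and it assumes both processes share a filesystem and that A finishes writing before B reads):
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    const std::string path = "ipc_message.txt"; // hypothetical exchange file
    if (argc > 1 && std::string(argv[1]) == "send") { // run as process A
        std::ofstream out(path);
        out << "hello from A\n"; // the data crosses the process boundary via the file
    } else { // run as process B
        std::ifstream in(path);
        std::string line;
        if (std::getline(in, line))
            std::cout << "B received: " << line << '\n';
        else
            std::cerr << "nothing to read yet; A may not have run\n";
    }
}
The pitfalls are exactly the synchronization ones: B can observe an absent or half-written file, and standard C++ offers no file locking to prevent it.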
Read the C++17 standard (e.g., the draft here) to check.
Some C++17 implementations might not even have any notion of a process. You could have a fully conforming C++17 implementation on some embedded system without any operating system dealing with processes.
My recommendation is to be pragmatic and use some framework like Boost, Qt, ZeroMQ, or POCO (or old Berkeley sockets) that deals with processes and inter-process communication facilities; you're likely to find a framework supporting the several OSes you really care about (AFAIK, Boost, POCO, and Qt all know about Linux, Windows, and Mac OS X and offer a common API abstracting them; you could find some academic operating system that is incompatible with them, but in practice any framework targeting both Windows and POSIX should be enough).
With some curiosity, you may find an OS with a good C++17 implementation that has a very weird API (look into GNU Hurd for an example).
If your IPC facility is based on byte streams, look into text-based protocols (perhaps JSON-RPC, SOAP, HTTP, ...). They are easier to code against, and most of them come with some C++-compatible library.
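To illustrate why text-based protocols are easy to code against, the string below is a complete JSON-RPC 2.0 request (the method name and parameters are invented); any byte stream the OS or a framework gives you can carry it:
#include <iostream>
#include <string>

int main() {
    // A complete JSON-RPC 2.0 request: readable and debuggable with any text tool.
    const std::string request =
        R"({"jsonrpc": "2.0", "method": "add", "params": [2, 3], "id": 1})";
    // In a real program this would be written to a socket or pipe obtained
    // through some OS-specific or framework-provided facility.
    std::cout << request << '\n';
}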
And with a few months of work and a lot of know-how, you might even port a recent GCC or Clang to most other operating systems: they are careful to abstract their requirements on the OS in a clever way.
Remember, you could find OSes that don't even have a file system: look into CapROS or Contiki for recent examples, and look inside tunes.org, where interesting discussions related to your topic were archived in the past century. But with some pain (my guess is a few months of work for a GCC or Clang expert), you'd be able to port a recent GCC or Clang to obtain a C++17 cross-compiler targeting them.
IMHO, a C++ standard library that only allows "opening" one single "file" (supposedly named THEFILE) conforms to the letter of the C++17 standard. AFAIK, you don't have any guarantee that std::ifstream or std::ofstream works successfully.
BTW, current processors are practically all multi-core, so it makes a lot of sense to try running A and B in parallel and doing some IPC (in an OS-specific way, perhaps abstracted by some framework or library).

Communicate with CoDeSys program on a Linux-based WAGO PFC200 PLC

I'm currently getting familiar with PLCs, the WAGO 750-8206 PLC in particular. It runs a Linux OS and can run CoDeSys programs. There are some I/O modules attached to the controller: 750-530, 750-430 and 750-600. What I would like to know is this:
Is it possible to write a C++ Linux application that runs on the PLC and gets/sets the digital inputs and outputs?
Even better: can I write a CoDeSys program that "talks to the I/Os" and handles all the logic, and at the same time can be accessed by a C++ Linux program? The idea is this: I would like the CoDeSys program to check, let's say, two digital inputs. If both are high, a variable should be set to a defined value. The Linux application should be able to read that variable and conduct further processing (such as sending JSON data to a server or similar).
Also, I would need to be able to send commands from the Linux application to the CoDeSys program in order to switch digital outputs (or set values on analog outputs, etc.) when the Linux application receives a message that triggers the command.
Any thoughts and comments on this topic are greatly appreciated as I am completely new to this topic. Thanks in advance!
The answer you might want
The actual situation has changed into the opposite of the previous answer.
WAGO's recent Board Support Packages and documentation actively support you in making changes and extensions to the PFC200 line. Specifically, the WAGO 750-8206 and (as of March 2016) 17 other PLCs:
wago.us -> Products -> Components for Automation -> Modular WAGO-I/O-SYSTEM, IP 20 (750/753 Series)
What you have to do is get in touch with them and ask for their latest Board Support Package (BSP) for the PFC200 line.
I quote from the previous answer and mark the changes, my additions are in bold.
Synopsis
Could you hack a PFC200 and get custom binaries executed? Probably Absolutely yes. As long as the program is content to run on the Linux 3.6.11 kernel and glibc 2.16 and is compiled for the "armhf" ABI, any existing ARM application, provided you also copy the libraries it uses, will just run without even being compiled specifically for the PFC200.
Would it be easy or quick? No. Yes, if you have no fear of the Linux command line. It is as easy as using the cross compiler provided by the Board Support Package (BSP) with the provided C libraries and then running the following to transfer your program to the PFC's flash and run it:
scp your-program root@PFC200:/usr/bin
ssh root@PFC200 /usr/bin/your-program
Of course, you can use Eclipse CDT with the cross toolchain for the PFC200 and configure Eclipse to do remote run and debug.
Will this change in the future? Maybe. Remember that the PFC200 is fairly new in North America. It has: the PFC200 appeared in September 2014.
The public HOWTO Building FORTE for Wago describes how to use the initial BSP to run FORTE, the IEC 61499 runtime environment of 4DIAC (link: sf.net/projects/fordiac), an open source PLC environment that allows implementing industrial control solutions in a vendor-neutral way. 4DIAC implements IEC 61499, extending IEC 61131-3 with better support for controller-to-controller communication and dynamic reconfiguration.
In case you want to access the KBUS (which talks to the I/Os) directly, you have to know that currently only one application can be in charge of the KBUS.
So either CODESYS, or FORTE, or your own KBUS application can be in charge of the KBUS.
The BSP from 2015 has many examples and demos of how to use all the I/O of the PFC200 (KBUS, CAN, MODBUS, PROFIBUS, as well as the switches and LEDs on the PFC200 directly). Sources for the kernel with all kernel drivers and the other open source components are provided and compiled in the Board Support Package (BSP).
But the sources for the libraries and tools that were developed from scratch by WAGO and are not based on GPL/open source code are not provided. These include the Application Device Interface (ADI)/Device Abstraction Layer (DAL) libraries, which handle CANopen, PROFIBUS slave, and KBus (which is used by all PLC I/O modules connected to the main PLC unit).
While CANopen uses the standard Linux SocketCAN API to talk to the kernel, and you could just write a normal SocketCAN program using the provided libsocketcan, the KBus API is a WAGO-specific invention, and there you'd have to do some reverse engineering if you didn't want to use WAGO's DAL for accessing all the electrical I/O of the PLC. But the DAL is documented, and examples of how to use it are provided in the BSP.
If you use CODESYS, however, there is a "codesys_lib_demo-0.1" example library which shows how to provide a library for CODESYS to use.
Outdated Answer
This answer was very specific to the circumstances in 2014 and 2015. As of 2016, it contains incorrect information. I'm still going to leave it as-is for now to provide background.
The quick answer you probably don't want
You could very reasonably write code in Codesys that puts together a JSON packet and sends it off to a server elsewhere. JSON is just text, and Codesys can manipulate text in a fashion very similar to C. And there are many Ethernet protocols available from within Codesys using add-on libraries provided by Wago.
Now the long answers
First some background
Since you seem to be new to Wago and the philosophy of Codesys in general... a short history.
Codesys is used to build and deploy hard real-time execution environments, and it is important to understand that using libraries without fully understanding the consequences can destabilize the performance of the entire system (bringing Codesys to its knees and throwing watchdog errors in the program). Remember, many PLCs control equipment that could kill someone if it ever crashed.
Wago is fond of using Linux to provide the preemptive RT kernel for the low-level task scheduling and then configuring Codesys to utilize much of the standard C libraries that often accompany Linux. Wago has been doing this for quite some time, but they would never let you peel back the covers without going through Codesys (which means using the IEC 61131 languages, of which C++ is not one), and this was for your own safety (and their product image). If you wanted the power of Linux on a Wago, you had to get a special PLC with a completely naked OS, practically no manual or support, and forfeit the entire Codesys runtime.
The new PFC200s have much more RAM and memory available than previous models, allowing more of the standard Linux userland stack (ssh, ftp, http, ...) to be included without compromising the Codesys runtime, and they advertise this. BUT... they are still keeping a lid on the compilation tools and the header files required to compile and link against Codesys libraries or access specialized hardware (the Wago KBUS, which interfaces your I/O modules).
The Synopsis
Could you hack a PFC200 and get custom binaries executed? Probably yes.
Would it be easy or quick? No.
Will this change in the future? Maybe. Remember that PFC200 is fairly new in North America.
Things you may not know
Codesys does not necessarily know or care about Wago. You can get target platforms for Codesys that target Intel processors running a Linux OS. Codesys DOES SUPPORT accessing external libraries (communication in the reverse direction is dangerous), but it often expects a C-style interface, and you can only access those libraries by defining C headers that Codesys will analyze, so you may need to do some magic to get C++ working seamlessly. What you can do is create a segment of shared memory that both C++ and Codesys access, and that is how they pass information (synchronization is another problem); a sketch of the Linux side follows.
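Here is a minimal sketch of the Linux side of that shared-memory idea, using POSIX shared memory (the segment name and the layout are invented; how the Codesys side maps the same segment depends on the target, and no locking is shown):
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

struct Exchange {   // layout both sides must agree on
    int command;    // written by the C++ side, read by Codesys
    int status;     // written by Codesys, read by the C++ side
};

int main() {
    // May need -lrt on older glibc versions.
    int fd = shm_open("/plc_exchange", O_CREAT | O_RDWR, 0660);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(Exchange)) != 0) { perror("ftruncate"); return 1; }
    void* p = mmap(nullptr, sizeof(Exchange), PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    auto* ex = static_cast<Exchange*>(p);
    ex->command = 42; // unsynchronized write: a real design needs locks or atomics
    munmap(p, sizeof(Exchange));
    close(fd);
}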
You can get an open Wago PLC running Codesys on Linux. Wago's IPCs are made specifically for this purpose. They have more power, memory, and communication capabilities in general, but they do cost more than double your typical Wago PLC.
If you feel like toying with the idea of hacking a Wago, you will need to tear apart the manuals for Codesys (it has its own), the manuals for the Wago IPCs, and already be familiar with Linux-style inter-process communication and/or dynamic libraries.
Also, there is an older Wago PLC that had the naked Linux on it, the 750-8??. It also has a very good manual on how to access the Wago hardware using the supplied headers.
You must first understand how Codesys expects to talk to its target operating system. Then you work backwards to make it talk to Wago specific libraries living on that operating system. You must be careful not to hijack Codesys.
Your extra C++ libs should assist Codesys, not take it over. For instance, host an SQLite database on the same device, and use C++ to manage the database and provide a very simple interface that Codesys can utilize. All Codesys would do is call a function and pass some values, but your C++ would actually build an SQL query and issue it to the database (Codesys doesn't need to know why or how this is happening). A sketch of such a wrapper is shown below.
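In this sketch, the function name, database file, and schema are all invented; SQLite's C API is used directly, and the flat extern "C" entry point is the part Codesys would call:
#include <sqlite3.h>
#include <string>

// One flat, C-callable function; all SQL knowledge stays on the C++ side.
extern "C" int log_sample(int channel, double value) {
    sqlite3* db = nullptr;
    if (sqlite3_open("plc_log.db", &db) != SQLITE_OK) return -1;
    const std::string sql =
        "INSERT INTO samples(channel, value) VALUES ("
        + std::to_string(channel) + ", " + std::to_string(value) + ");";
    char* err = nullptr;
    const int rc = sqlite3_exec(db, sql.c_str(), nullptr, nullptr, &err);
    sqlite3_free(err);  // safe to call on a null pointer
    sqlite3_close(db);
    return rc == SQLITE_OK ? 0 : -2;
}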
I hope at least one paragraph is helpful in some way.

COM in the non-Windows world?

Hope this question isn't going to be too vague. Reading through the COM spec and Don Box's Essential COM book, there is plenty of talk of the "problems that COM solves" - and they all sound important, relevant and current.
So how are the problems that COM addresses dealt with on other systems (Linux, Unix, OS X, Android)? I'm thinking of things like:
binary compatibility across compilers and compiler versions
binary component reuse
compiling an application such that it has run-time dependencies rather than load-time ones (so that it runs even when a dependency is missing)
access to library functionality from languages other than the library's own
reasonably low-overhead remote procedure calls to components loaded in the address space of a different process
etc (I'm sure the list goes on)
I'm basically just trying to understand why, for instance, on Linux CORBA isn't a thing the way COM is a thing on Windows (if that makes any sense). Does software development on Linux perhaps subscribe to a different philosophy than the component-based model proposed by COM?
And finally, is COM a C/C++ thing? Several times I've come across comments from people saying COM is made "obsolete" by .NET, but without really explaining what they meant by that.
For the remainder of this post, I'm going to use Linux as an example of open-source software. Where I mention "Linux" it's mostly a short/simple way to refer to open source software in general though, not anything specific to Linux.
COM vs. .NET
COM isn't actually restricted to C and C++, and .NET doesn't actually replace COM. However, .NET does provide alternatives to COM for some situations. One common use of COM is to provide controls (ActiveX controls). .NET provides/supports its own protocol for controls that allows somebody to write a control in one .NET language, and use that control from any other .NET language--more or less the same sort of thing that COM provides outside the .NET world.
Likewise, .NET provides Windows Communication Foundation (WCF). WCF implements SOAP (Simple Object Access Protocol), which may have started out simple, but grew into something a lot less simple at best. In any case, WCF provides many of the same kinds of capabilities as COM does. Although WCF itself is specific to .NET, it implements SOAP, and a SOAP server built using WCF can talk to one implemented without WCF (and vice versa). Since you mention overhead, it's probably worth mentioning that WCF/SOAP tend to add more overhead than COM (I've seen anywhere from nearly equal to about double the overhead, depending on the situation).
Differences in Requirements
For Linux, the first two points tend to have relatively low relevance. Most software is open source, and many users are accustomed to building from source in any case. For such users, binary compatibility/reuse is of little or no consequence (in fact, quite a few users are likely to reject all software that isn't distributed in source code form). Although binaries are commonly distributed (e.g., with apt-get, yum, etc.), they're basically just caching a binary built for a specific system. That is, on Windows you might have a single binary for use on anything from Windows XP up through Windows 10, but if you use apt-get on, say, Ubuntu 18.04, you're installing a binary built specifically for Ubuntu 18.04, not one that tries to be compatible with everything back to Ubuntu 10.04 (or whatever).
Being able to load and run (with reduced capabilities) when a component is missing is also most often a closed-source problem. Closed source software typically has several versions with varying capabilities to support different prices. It's convenient for the vendor to be able to build one version of the main application, and give varying levels of functionality depending on which other components are supplied/omitted.
That's primarily to support different price levels though. When the software is free, there's only one price and one version: the awesome edition.
Access to library functionality between languages again tends to be based more on source code than on a binary interface, such as using SWIG to allow the use of C or C++ source code from languages like Python and Ruby. Again, COM is basically curing a problem that arises primarily from a lack of source code; when using open source software, the problem simply doesn't arise in the first place.
Low-overhead RPC to code in other processes again seems to stem primarily from closed source software. When/if you want Microsoft Excel to be able to use some internal "stuff" in, say, Adobe Photoshop, you use COM to let them communicate. That adds run-time overhead and extra complexity, but when one of the pieces of code is owned by Microsoft and the other by Adobe, it's pretty much what you're stuck with.
Source Code Level Sharing
In open source software, however, if project A has some functionality that's useful in project B, what you're likely to see is (at most) a fork of project A to turn that functionality into a library, which is then linked into both the remainder of project A and into Project B, and quite possibly projects C, D, and E as well--all without imposing the overhead of COM, cross-procedure RPC, etc.
Now, don't get me wrong: I'm not trying to act as a spokesperson for open source software, nor to say that closed source is terrible and open source is always dramatically superior. What I am saying is that COM is defined primarily at a binary level, but for open source software, people tend to deal more with source code instead.
Of course, SWIG is only one example among several tools that support cross-language development at the source-code level. While SWIG is widely used, COM differs from it in one rather crucial way: with COM, you define an interface in a single neutral language and then generate a set of language bindings (proxies and stubs) that fit that interface. This is rather different from SWIG, where you're mapping directly from one source language to one target language (e.g., bindings to use a C library from Python).
Binary Communication
There are still cases where it's useful to have at least some capabilities similar to those provided by COM. These have led to open-source systems that resemble COM to a rather greater degree. For example, a number of open-source desktop environments use/implement D-bus. Where COM is mostly an RPC kind of thing, D-bus is mostly an agreed-upon way of sending messages between components.
D-bus does, however, specify things it calls objects. Its objects can have methods you can invoke and signals they can emit. Although D-bus itself defines this primarily in terms of a messaging protocol, it's fairly trivial to write proxy objects that make invoking a method on a remote object look pretty much like invoking one on a local object. The big difference is that COM has a "compiler" that can take a specification of the protocol and automatically generate those proxies for you (and corresponding stubs on the far end to receive the message and invoke the proper function based on the message it received). That's not part of D-bus itself, but people have written tools to take (for example) an interface specification and automatically generate proxies/stubs from that specification.
As such, although the two aren't exactly identical, there's enough similarity that D-bus can be (and often is) used for many of the same sorts of things as COM.
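To make the remote-method-call idea concrete, here is a hedged sketch using systemd's sd-bus, one of several client libraries (not part of D-bus itself); it calls the bus daemon's own ListNames method and should be compiled with -lsystemd:
#include <systemd/sd-bus.h>
#include <cstdio>

int main() {
    sd_bus* bus = nullptr;
    if (sd_bus_open_user(&bus) < 0) return 1; // connect to the session bus
    sd_bus_error error = SD_BUS_ERROR_NULL;
    sd_bus_message* reply = nullptr;
    // Destination, object path, interface, method: the same addressing that a
    // generated proxy would hide behind an ordinary-looking function call.
    int r = sd_bus_call_method(bus,
                               "org.freedesktop.DBus",
                               "/org/freedesktop/DBus",
                               "org.freedesktop.DBus",
                               "ListNames",
                               &error, &reply, "");
    if (r >= 0 && sd_bus_message_enter_container(reply, 'a', "s") >= 0) {
        const char* name = nullptr;
        while (sd_bus_message_read(reply, "s", &name) > 0)
            std::printf("%s\n", name); // each well-known name on the bus
    }
    sd_bus_error_free(&error);
    sd_bus_message_unref(reply);
    sd_bus_unref(bus);
    return r < 0 ? 1 : 0;
}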
Systems Similar to DCOM
COM also allows you to build distributed systems using DCOM (Distributed COM), that is, a system where you invoke a method on one machine but (at least potentially) execute that invoked method on another machine. This adds more overhead, but since (as pointed out above with respect to D-bus) RPC is basically communication with proxies/stubs attached to the ends, it's pretty easy to do the same thing in a distributed fashion. The difference in overhead, however, tends to lead to differences in how systems need to be designed to work well, so the practical advantage of using exactly the same mechanism for distributed and local systems tends to be fairly minimal.
As such, the open source world provides tools for doing distributed RPC, but doesn't usually work hard at making them look the same as non-distributed systems. CORBA is well known, but generally viewed as large and complex, so (at least in my experience) current use is fairly minimal. Apache Thrift provides some of the same general capabilities, but in a rather simpler, lighter-weight fashion. In particular, where CORBA attempts to provide a complete set of tools for distributed computing (complete with everything from authentication to distributed time keeping), Thrift follows the Unix philosophy much more closely, attempting to meet exactly one need: generating proxies and stubs from an interface definition (written in a neutral language). If you want to do those CORBA-like things with Thrift, you undoubtedly can, but in the more typical case of building internal infrastructure where the caller and callee trust each other, you can avoid a lot of overhead and just get on with the business at hand. Likewise, Google's gRPC provides roughly the same sorts of capabilities as Thrift.
OS X Specific
Cocoa provides distributed objects that are fairly similar to COM. This is based on Objective-C, though, and I believe it's now deprecated.
Apple also offers XPC. XPC is more about inter-process communication than RPC, so I'd consider it more directly comparable to D-bus than to COM. But, much like D-bus, it has a lot of the same basic capabilities as COM, in a different form that places more emphasis on communication and less on making things look like local function calls (and many now prefer messaging to RPC anyway).
Summary
Open source software has enough different factors in its design that there's less demand for something providing the same mix of capabilities as Microsoft's COM provides on Windows. COM is largely a single tool that tries to meet all needs. In the open-source world, there's less drive to provide that single, all-encompassing solution, and more tendency toward a kit of tools, each doing one thing well, that can be put together into a solution for a specific need.
Being more commercially oriented, Apple's OS X probably has what are (at least arguably) closer analogs to COM than most of the more purely open-source world.
A quick answer to the last question: COM is far from obsolete. Almost everything in the Microsoft world is COM-based, including the .NET engine (the CLR) and the new Windows 8.x Windows Runtime.
Here is what Microsoft says about .NET in its latest C++ pages, Welcome Back to C++ (Modern C++):
C++ is experiencing a renaissance because power is king again. Languages like Java and C# are good when programmer productivity is important, but they show their limitations when power and performance are paramount. For high efficiency and power, especially on devices that have limited hardware, nothing beats modern C++.
PS: which is a bit of a shock for a developer who has invested more than 10 years on .NET :-)
In the Linux world, it is more common to develop components that are statically linked, or which run in separate processes and communicate by piping text (maybe JSON or XML) back and forth.
Some of this is due to tradition. UNIX developers have been doing stuff like this long before CORBA or COM existed. It's "the UNIX way".
As Jerry Coffin says in his answer, when you have the source code for everything, binary interfaces are not as important, and in fact just make everything more difficult.
COM was invented back when personal computers were a lot slower than they are today. In those days, loading components into your app's process space and invoking native code was often necessary to achieve reasonable performance. Now, parsing text and running interpreted scripts aren't things to be afraid of.
CORBA never really caught on in the open-source world because the initial implementations were proprietary and expensive, and by the time high-quality free implementations were available, the spec was so complicated that nobody wanted to use it if they weren't required to do so.
To a large extent, the problems solved by COM are simply ignored on Linux.
It is true that binary compatibility is less important when you have the source code available. However, you still have to worry about modularisation and versioning. If two different programs depend on different versions of the same library, you need to support that somehow.
Then there is the case of the same program using different versions of the same library. This is often useful when working on large legacy programs, where upgrading everything can be prohibitively expensive but you would like to use new features anyway. With COM, the old parts of the program can just be left alone, since new library versions can more easily be made backwards compatible.
In addition, having to compile from source instead of relying on binary compatibility is a huge hassle, especially if you are actually developing software, since binary incompatibility means you have to recompile much more often. If one tiny part changes in a large C++ program, you may have to wait for a 30-minute recompile. If the different pieces are compatible, only the part that changed has to be recompiled.
COM, and DCOM in particular, have been around in Windows for some considerable time now, and naturally Windows developers have made use of this powerful framework.
We are now in the cross-platform age, and when porting such applications to other platforms we are faced with challenges which in many cases can be mitigated or eliminated altogether, unless the application we are porting is more than just one simple standalone app.
If you're dealing with a whole suite of modules running on different machines, all communicating using Windows-specific technologies such as DCE/RPC, DCOM, or even Windows named pipes, then your job just became an order of magnitude harder.
DCE/RPC, DCOM, and Windows named pipes are all very Windows-specific, non-portable, and of course subject to Windows security access control.
For instance, anyone familiar with OPC DA (an industrial automation protocol based on DCOM, still very much in use but now superseded by OPC UA, which avoids DCOM) will know that there are no elegant solutions here if the client (or server) needs to be available for Linux!
Sure, there appear to be some technical hurdles here given that the MS code is not in the public domain, but projects such as Wine have a partly OK DCE/RPC implementation, and MS does publish some of the protocol docs. Try searching and you will probably find little information and few products, open source or otherwise, to help you.
Perhaps the lack of open source or affordable options here is due more to legal concerns - I wonder!
Some simpler solutions simply involve installing a "gateway service" on the Windows machines to allow an alternative means of access to the DCOM interfaces on that machine. This is fine if the Windows machine does not belong to an unwilling third party, which unfortunately is sometimes the case! "I know, we'll just chuck another Windows machine in the middle as the gateway" is the usual global-warming-enhancing solution to that problem.
I would conclude that Linux-to-Windows DCOM interoperability is certainly not impossible, but it does appear to be a topic that few are interested in talking about unless you get your wallet out!

Does C++ deprecate some parts of the Linux API?

I am in the middle of reading The Linux Programming Interface and Linux Programming by Examples. Both are very good books and explain the Linux API very well. But quite often I find myself thinking that in real-world projects I would prefer the C++ standard library, Boost, or some other good C++ library (there are many well-written and portable C++ libs) over the C API whenever possible. This naturally begs the question: why do I need to use the Linux API directly when a good C++ compiler and libs (Boost, TBB, etc.) are available on the target platforms? I guess the same could be said about the Windows API too, but I don't know much about Windows system programming.
This is not new to C++. In C, there have been two ways to open files for a long, long time:
// Only on POSIX
int fdes = open("file.txt", O_RDONLY);
Or:
// Any hosted C environment, POSIX or otherwise
FILE *fp = fopen("file.txt", "rb");
So why would anyone ever use the POSIX-specific version? The answer is simple: there is a large number of system calls which work with POSIX file descriptors, for example select. You can also make things other than files, like pipes and sockets, and you can pass them to other processes. There is a long tradition of using POSIX file descriptors, and we have a large number of books and references on how to do network programming with them.
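A short sketch of the kind of thing only the descriptor version allows: waiting on a descriptor with select (shown with a plain file for simplicity; in practice the same set would mix pipes and sockets):
// POSIX only, like the open() example above.
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("file.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(fd, &readable);  // any descriptor works: files, pipes, sockets
    timeval timeout{5, 0};  // give up after five seconds
    int n = select(fd + 1, &readable, nullptr, nullptr, &timeout);
    if (n > 0 && FD_ISSET(fd, &readable))
        std::puts("descriptor is ready for reading");
    close(fd);
    return 0;
}
There is no way to express that wait through FILE* or std::ifstream.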
So the trade-off is between the portable version and the powerful version. It always has been.
The other half of this is that any time you work with files on Linux, you are working with the POSIX interface. Libraries just hide it from you. Boost uses it, the C runtime uses it, the JRE uses it, and GHC uses it. Many (most?) language runtimes are written in C, and direct access to the system calls is preferred.
You should use the higher-level API whenever possible. It's usually faster to work with and makes it easier to port your code to another platform. However:
Due to the law of leaky abstractions, it's useful to know the underlying operating system API so you understand the various quirks and performance issues that the higher-level API was not able to hide.
Some things are not doable with the portable API, usually because the functionality is so different between operating systems that it's not easily possible.
All portable APIs incur some overhead. In a big project it's small compared to the rest of the code, but if you are doing something small, you might want to avoid that overhead, especially if you know you'll need to use the specific API somewhere anyway.
The C++ standard is not published for a particular platform; it is platform independent. So if you are going to use some platform-specific feature or functionality, you will have to use the platform-dependent API, usually called the system API. So in that sense, no, the C++ library does not deprecate the Linux/Windows API.

How do I read system information in C++?

I'm trying to get information like OS version, hard disk space, disk space available, and installed RAM on a Linux system in C++. I know I can use system() to run different Linux commands and capture their output (which is what I'm currently doing) but I was wondering if there's a better way? Is there something in the C++ standard library that I can use to get information from the operating system?
If you are using *nix commands via system(), then run man for that command and scroll to the bottom of the man page; it will usually show you which related C system calls are relevant.
Example: man uname:
SEE ALSO
uname(2), getdomainname(2), gethostname(2)
Explanation of the numbers (the man page sections):
(1): User UNIX commands
(2): Unix and C system calls
(3): C library routines
(4): Special file names
(5): File formats
(6): Games
(7): Miscellanea (conventions, macro packages, etc.)
(8): System admin commands
So if you are using system("uname"), from the man page you can see that there is also a uname C system call (uname(2)). You can now do 'man 2 uname' to get information about how to use the C system call uname.
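As a quick sketch of what that buys you, calling the system call directly instead of spawning /bin/uname through system():
#include <sys/utsname.h>
#include <cstdio>

int main() {
    utsname info{};  // filled in by the kernel
    if (uname(&info) != 0) return 1;
    std::printf("sysname: %s\n", info.sysname); // e.g. "Linux"
    std::printf("release: %s\n", info.release); // kernel version
    std::printf("machine: %s\n", info.machine); // hardware type
    return 0;
}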
There is nothing in the C++ standard library for these purposes. A library you could use is libhal, which abstracts programs' view of the hardware, collecting various information from /proc, /sys, and other sources. See HAL; scroll down, and there seems to be an unofficial C++ binding available too (I haven't tested it, though libhal also works fine for C++ programs). Use the command lshal to display all device information available to HAL.
If you don't want to use HAL as litb suggests, you can read things straight out of the /proc filesystem, provided it's there on your system. This isn't the most platform-independent way of doing things, and in many cases you'll need to do a little parsing to pick apart the files.
I think HAL abstracts a lot of these details for you, but just know that you can read it straight from /proc if using a library isn't an option.
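A hedged sketch of that approach, reading the installed RAM from /proc/meminfo and the available disk space via statvfs(3); the field names are Linux-specific and the parsing is deliberately minimal:
#include <sys/statvfs.h>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    // /proc/meminfo lines look like: "MemTotal:  16367644 kB"
    std::ifstream meminfo("/proc/meminfo");
    std::string key;
    long kib = 0;
    while (meminfo >> key >> kib) {
        if (key == "MemTotal:") {
            std::cout << "installed RAM: " << kib / 1024 << " MiB\n";
            break;
        }
        meminfo.ignore(1024, '\n'); // skip the rest of the line
    }
    struct statvfs vfs{};
    if (statvfs("/", &vfs) == 0)    // space on the root filesystem
        std::cout << "available on /: "
                  << (vfs.f_frsize * vfs.f_bavail) / (1024 * 1024) << " MiB\n";
    return 0;
}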
System information is by definition not portable, so there is no standard solution. Your best bet is using a library that does most of the work for you. One such cross-platform library (unlike HAL, which is currently Linux-specific) is the SIGAR API, which is open source, BTW. I've used it in a C++ project without much trouble (the installation is a bit non-standard but can be figured out easily).