How can I know what built-in functions DTrace provides? - dtrace

As we know, DTrace on different OSs provides different built-in functions. For example, older versions of Solaris do not have inet_ntop() available in DTrace.
So when I write a DTrace script for a specific OS, how can I find out in advance which built-in functions DTrace provides? Or is the only way to run the script and check DTrace's error messages?

The best solution is to refer to the DTrace documentation for the version of the OS in question. For Solaris, new DTrace features almost always appear only in major releases or updates, and the documentation is (or should be) updated at the same time. Thus, if you're running Solaris 11.1 then you should consult the "Oracle Solaris 11.1 Dynamic Tracing Guide".
Solaris's dtrace(1) has no "show me the currently supported actions" option but you could consider logging an RFE.
If you write a script that requires a specific version of the DTrace implementation then you can bind to it with an option or pragma. This mechanism should exist in other DTrace implementations but the meaning of any particular version number will be different for each fork. Thus, as always, it's best to rely on the documentation.

If you wish to list the probes made available by the various DTrace providers (along with the module and function each probe instruments), you can use dtrace -l.

Related

How to compile C++ code on Windows for Linux (Dev-Cpp)

I use the Dev-Cpp program with the MinGW compiler, which lets you compile C/C++ code into Windows executables, but is there a compiler for Windows that allows you to create executables for Linux?
You can install Windows Subsystem for Linux, or set up a VM and do it that way.
Or, as @user4581301 mentioned, use a cross-compiler.
http://metamod-p.sourceforge.net/cross-compiling.on.windows.for.linux.html
Ignoring the fact that Dev-C++ has been obsolete for nearly a decade (though, perhaps unpopularly, I think you should use whatever tools help you learn, even 'obsolete software', as long as it's purely for learning and not production use)...
You have a couple of options, one of which has already been mentioned: 1) use a cross-compiler, or 2) (which I would personally recommend, if it is viable for your particular needs) simply compile on actual Linux.
To do this, you just need a working distribution of Linux with a development environment. You can use a virtual machine, Windows Subsystem for Linux (WSL), or a physical machine with Linux running on it.
From there, if you want the code to compile for multiple operating systems, you'll have to make sure your libraries, frameworks, and other OS-specific code (e.g., filesystem paths, system calls) are properly handled, or just use cross-platform libraries. If you're dealing with standard C/C++, this won't be a concern.
Since Dev-C++ uses MinGW (the Windows port of GCC), the actual compilation process should be much the same. On Linux, though, IDEs are less commonly used, so you may have to get your hands dirty with shell commands, but that's not too hard once you get started. Good luck!

C compiler with windows

I am working with the GCC compiler in the NetBeans IDE, but there is something I cannot understand, and I could not find an answer through a Google search.
Question:
Why do we need the Cygwin tool when working with the GCC compiler on Windows, while we do not need it on Linux?
TL;DR
You don't need Cygwin to write C++ programs.
Details
All you really need to write programs in C++ is a tool chain that supports your target environment and a text editor.
Cygwin is a compatibility layer that brings to Windows a higher level of POSIX compatibility than Windows systems normally provide.
Linux is an operating system that already supports POSIX, so no POSIX compatibility layer is required. Instead you may find yourself using tools like wine to run Windows programs.
You do not need Cygwin to use C++. You only need Cygwin if you want to build and run, on a Windows-based system, programs that were written assuming POSIX compliance. If you write a program for Linux and it uses Linux system calls, odds are you will need Cygwin to compile and run it on Windows without replacing the system calls with their Windows equivalents.1 Ditto if you are writing on Windows and intend to use the same code on Linux or any other POSIX-compliant OS.
You can use other libraries, Boost being a common option, to provide cross-platform compatibility. If you are feeling adventurous, or only have a limited subset of non-portable system calls, you can also write your own layer to sit between your code and the target system.
1Linux has its own calls in addition to POSIX support, so don't assume that you can always do this.
You don't.
There are various builds of the GNU C compiler for Windows.
One is the Cygwin port of GCC.
Another is MinGW, which doesn't require the Cygwin runtime.
There are others.
The compiler needs to access operating system features, and each build targets different OS facilities.

What are the differences between C/C++ bare-metal compilation and compilation for a specific OS (Linux)?

Suppose you have a cross-compilation tool-chain that produces binaries for the ARM architecture.
Your tool-chain is like this (running on a X86_64 machine with Linux):
arm-linux-gnueabi-gcc.exe : for cross-compilation for Linux, running on ARM.
arm-gcc.exe : for bare-metal cross-compilation targeting ARM.
... and the plethora of other tools for cross-compilation on ARM.
Points that I'm interested in are:
(E)ABI differences between binaries (if any)
limitations in case of bare-metal (like dynamic memory allocations, usage of static constructors in case of C++, threading models, etc)
binary-level differences between the 2 cases in terms of information specific to each of them (like debug info support, etc);
ABI differences come down to how you invoke the compiler; for example, GCC has -mabi, which can be one of ‘apcs-gnu’, ‘atpcs’, ‘aapcs’, ‘aapcs-linux’, or ‘iwmmxt’.
On bare metal, limitations on various runtime features exist simply because nobody has provided them, be it zero-initializing allocated areas or supporting C++ features such as static constructors. If you can supply them yourself, they will work.
Binary-level differences are likewise determined by how you invoke the compiler.
You can check GCC ARM options online.
I recently started a little project to use a Linux standard C library in a bare-metal environment. I've been describing it on my blog: http://ellcc.org/blog/?page_id=289
Basically what I've done is set up a way to handle Linux system calls so that by implementing simplified versions of certain system calls I can use functions from the standard library. For example, the current state for the ARM implements simplified versions of read(), readv(), write(), writev() and brk(). This allows me to use printf(), fgets(), and malloc() unchanged.
In my case, I use the same compiler for targeting Linux and bare metal. Since it is clang/LLVM based, I can also use the same compiler to target other processors. I'm working on a bare-metal example for MIPS right now.
So I guess the answer is that there doesn't have to be any difference.

Desktop Development Environment that Compiles to Linux, Mac OS, and Windows

Are there any development environments that allow you to have one code base that can compile to Linux, Mac OS, and Windows versions without much tweaking? I know this is like asking where the Holy Grail is buried, but maybe such a thing exists. Thanks.
This is achieved through a number of mechanisms, the most prominent being build systems and system-specific versions of certain code. What you do is write your code such that, whenever it requires an operating system API, it calls a specific function of your own. For example, I might use MyThreadFunction(). Now, when I build under Linux I get a Linux-specific version of MyThreadFunction() that calls pthread_create(), whereas the Windows version calls CreateThread(). The appropriate includes are also kept in these platform-specific files.
The other thing to do is to use libraries that provide consistent interfaces across platforms. wxWidgets is one such library for writing desktop apps, as are Qt and GTK+ for that matter. For any library you use, it is worth trying to find a cross-platform implementation. Someone will undoubtedly mention Boost at some point here. The other system I know of is the Apache Portable Runtime (APR), which provides a whole array of facilities to allow httpd to run on Windows/Linux/Mac.
This way, your core code-base is platform-agnostic - your build system includes the system specific bits and your code base links them together.
Now, can you do all this from one desktop? I know you can compile for Windows from Linux, and so probably for Mac OS X from Linux, but I doubt you can go the other way, from Windows to Linux. In any case, you need to test what you've built on each platform, so my advice would be to run virtual machines (see VMware/VirtualBox).
Finally, editors/environments: use whatever suits you. I use Eclipse or GVim on Linux and Visual Studio on Windows; VS is my "Windows build system".
Maybe something like CodeBlocks?
Qt is a good library/API/framework for doing this in C++, and Qt Creator is a very pleasant IDE for it.
I've heard this is possible; your compiler would need to support it. The only one I know of that does is GCC, but it obviously requires a special configuration. I, however, have never used this feature; I've only seen that it exists.
What you are looking for is called "Cross Compiling"

How do I get hardware information on Linux/Unix?

How can I get hardware information from a Linux/Unix machine?
Is there a set of APIs?
I am trying to get information like:
OS name.
OS version.
available network adapters.
information on network adapters.
all the installed software.
I am looking for an application which collects this information and shows it in a nice format.
I have used something similar with the "system_profiler" command-line tool for Mac OS X. I was wondering if something similar is available for Linux as well.
If you need a simple answer, use:
cat /proc/cpuinfo
cat /proc/meminfo
lspci
lsusb
and harvest any info you need from the output of these commands. (Note: the cut command may be your friend here if you are writing a shell script.)
Should you need more detail, add a -v switch to get verbose output from the lspci and lsusb commands.
If what you are looking for is a more feature-complete API, then use HAL, though that may be overkill for what you are trying to build.
If you are looking for a tool that shows system information, a GUI tool like HardInfo would be useful for you.
In Ubuntu, you can install HardInfo like this...
sudo apt-get install hardinfo
Cheers
There is a command, lshw (list hardware).
I would use HAL, the hardware abstraction layer. It includes some GUI commands, some tty commands (which can be used from shell programs), and library bindings for C and multiple other languages.
HAL is not really a standard part of Linux, but I think it is used by most modern distros.
Try sudo lshw.
It's the easiest.
Since you mentioned an API, try the exec family of functions in C. You can use them to execute the binaries that other people have mentioned. To create a robust/flexible solution you will probably also have to use the Unix fork() call, and you will need a mechanism for capturing the output spewed by these utilities; look into Unix pipes for that.
You can use inxi, which provides hardware information including CPU temperature and so on.
Install on Red Hat based OS
sudo dnf install inxi
Install on Debian based OS
sudo apt-get install inxi