How can I detect the presence of Intel Quick Sync from C++ code?

I want to detect whether Intel Quick Sync is present and enabled on the processor. Note that it may be disabled (powered down) if no video cable is plugged into the motherboard's video output, and it can also be disabled in the BIOS.
Thanks
Ron

There is no general solution for something like this; the code has to be specific to the OS you are running. It will likely boil down to a system query for the feature set of the processor, for example reading /proc/cpuinfo on Linux.
There are multiple ways to execute a system command from C++. Take a look at some of the previous answers to "How to execute a command and get output of command within C++ using POSIX?"

You can port the Intel System Analyzer Utility for Linux to C++ from Python. That's what I ended up doing.
That tool uses the output of cat /proc/cpuinfo and lspci to collect some info about the CPU, GPU, and software installation.

How can I get a hardware ID in Qt

I can't get the CPU ID or motherboard serial number on all operating systems (cross-platform Qt).
On Windows I'm using WMI, and on macOS something else. I want to use a cross-platform library.
Although Qt detects the CPU feature set at runtime (cf. src/corelib/tools/qsimd.cpp), it does not export any function to access that (nor any other CPUID information). You will have to write a small piece of assembly (or intrinsics) code to gather that information yourself. (source)
You will have to write some platform dependent code to retrieve this information.
For the CPU ID you should probably look into the __cpuid() intrinsic on Windows; this answer can help you get it on Linux.
While doing this you might also want to read up on motherboard serial numbers, as not all boards expose this information in the same place (many do not expose it at all).
On Windows you can also execute this command:
wmic cpu get ProcessorId
Qt 5.11 introduced this function: QSysInfo::machineUniqueId

How to use CAN-Bus on an Intel Atom Q7 module with EG20T chipset on Linux?

I want to use the CAN-Bus interface on an Intel Q7 module with the EG20T chipset. I got it to work on Windows, but now I have to get it to work on Linux, and I can barely find any information.
I just need to know how to read and write messages, start and stop the bus, and set the baud rate of the CAN-Bus.
So far I found this: http://cateee.net/lkddb/web-lkddb/PCH_CAN.html
and some comments about can4linux and socketCan for shell usage.
But I actually need to know how to use it from a C or C++ program.
Looks like that driver is a SocketCAN driver. Just compile and load the module, and your device will then show up as a network interface.
http://www.brownhat.org/docs/socketcan/llcf-api.html
This link has information about how to send messages and such.
Good luck!
Look here for more information about socketcan and linux implementation:
socketcan
Modern Linux distributions ship SocketCAN drivers out of the box, so there is no need to compile the driver yourself.
The SocketCAN project provides utilities for sending and receiving CAN frames and other related tasks. Please see this repository: https://github.com/linux-can/can-utils
There is also a central SocketCAN dedicated wiki: http://elinux.org/CAN_Bus

Is there an API call to disable the network for Linux?

I'm maintaining code on a real-time system running Red Hat Enterprise Linux. I'm afraid that sometimes, despite running things at the highest priority, the network manages to slow the computer down, and I would like to disable the card so that I have the full power of the CPU at my disposal. I need some files from the network at the beginning of my function, but after a certain point I can effectively disable the network until the program is complete. Is there a way to disable the network through some sort of API call?
Hopefully someone can expand on this, but it may be worthwhile to look into how ifconfig disables network devices. You can probably do an ioctl on the interface to disable it. Depending on the driver/NIC, the kernel may then be able to have the network hardware drop packets instead of the CPU.
The source code for ifconfig is here:
http://net-tools.git.sourceforge.net/git/gitweb.cgi?p=net-tools/net-tools;a=blob;f=ifconfig.c;h=be6999578bd81e91e90e26a35fad91f4928f4226;hb=HEAD
iproute2, which can do the same things as ifconfig, is described here:
http://git.kernel.org/?p=linux/kernel/git/shemminger/iproute2.git;a=blob;f=ip/iplink.c;h=6b051b65faab72ea46534ad33f71b3f6cd35c11b;hb=HEAD#l589
I found the source code of iproute2 slightly easier to understand than ifconfig, but it should be relatively easy to see how they interface with the networking stack to disable an interface.
From the shell this is done with ip link set dev ${DEVICE} down 2> /dev/null. Looking at the iproute2 code (e.g. ip/iplink.c) you can check for yourself how it is done: the key is clearing the interface's IFF_UP flag, roughly
ifr.ifr_flags &= ~IFF_UP;
which the kernel acts on by shutting the interface down. You can go on from here on your own, but keep in mind that the flag change is handled inside the kernel's networking code...
The networking is part of the kernel, rather than a user space process. You could do the equivalent of calling rmmod to remove the network driver module from the kernel, assuming you have module support enabled; or you could just set all the netfilter rules to DROP and see if that speeds things up. I'd probably prefer to do that in a launcher script (using the ready-made iptables-save/iptables-restore) rather than coding that in C++. Or you could even just bring the interfaces down.
You said "API" but did not mention the language of your program/script, nor your OS.
On Linux variants like CentOS, RHEL, SuSE, and Fedora, from a bash/Bourne shell script:
/sbin/service network stop
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "***** Failed to stop networking service"
    exit $rc
fi
# Do your thing
/sbin/service network start

How to profile memory usage and performance of an Open MPI program in C

I'm looking for a way to profile my Open MPI program in C. I'm using Open MPI 1.3 on Ubuntu 9.10, and my programs run on an Intel Duo T1600.
What I want to profile is cache misses, memory usage, and execution time in each part of the program.
Thanks for any replies.
For Linux I recommend Zoom for this kind of profiling. You can get a free 30 day evaluation in order to try it out.
I finally found graphical tools for MPI profiling:
Vampir: www.vampir.eu and
ParaProf: http://www.cs.uoregon.edu/research/tau/docs/paraprof/index.html
Enjoy!
Have a look at gprof and at Intel's VTune. Valgrind with the cachegrind tool could be useful, too.
Allinea MAP is ideal for this. It will highlight poor cache performance, memory usage and execution time right down to the source lines in your code. There is no need to recompile or instrument the application in order to profile it with Allinea MAP - which makes it unusually easy to get started with. On most HPC systems and with most MPIs it takes your binary, runs it, and loads up the source code automatically to display the recorded performance data.
Take a look at profiling MPI in general; mpiP and pgprof are two more tools for this.

CUDA program on VMware

I wrote a CUDA program and I am testing it on Ubuntu in a virtual machine. The reason is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need a Linux operating system for testing.
My question is: will the virtual machine limit the GPU resources? That is, will my CUDA code be faster if I run it under my primary operating system than in a virtual machine?
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8 GB thumb drive with persistent mode enabled.
That gave me 4GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.
Unfortunately the virtual machine simulates a graphics device and as such you won't have access to the real GPU. This is because of the way the virtualisation handles multiple VMs accessing the same device - it provides a layer in between to share the real device.
It is possible to get true access to the hardware, but only if you have the right combination of software and hardware, see the SLI Multi-OS site for details.
So you're probably out of luck with the virtualisation route - if you really can't run your app in Windows then you're limited to the following:
Unrealistic: Install Linux instead
Unrealistic: Install Linux alongside (not an option)
Boot into a live CD, you could prepare a disk image with CUDA and mount the image each time
Setup (or beg/borrow) a separate box with Linux and access it remotely
I just heard a talk at NVIDIA's GPU technology conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from Virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an opensource "virtual cuda" driver. See following link:
http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but it sounds like it might do what you want.
As of CUDA 3.1, its virtualization capabilities are limited, so the only dependable approach is to run CUDA programs directly on the target hardware and software.
Use rCUDA to add a virtual GPU to your VM.
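Whichever route you take (SLI Multi-OS, gVirtuS, rCUDA), you can verify from inside the guest whether a usable CUDA device is actually exposed with a short runtime query. This is a generic sketch, not tied to any of those products:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Report whether the CUDA runtime can see any device. Inside a plain
// VM this typically fails; with GPU passthrough or a forwarding layer
// (gVirtuS, rCUDA) it should list the real device.
int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess || n == 0) {
        std::printf("No CUDA device visible: %s\n",
                    cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < n; i++) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        std::printf("Device %d: %s, %d SMs, %.1f GB\n",
                    i, p.name, p.multiProcessorCount,
                    p.totalGlobalMem / 1073741824.0);
    }
    return 0;
}
```

Comparing the reported device name and memory size against what the host sees is a quick way to tell whether you got the real GPU or an emulated adapter.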