Cron to fire a compiled gfortran file - fortran

I am attempting to use cron to fire a compiled Fortran file: a.out
When fired successfully, a.out generates a fort.11 file.
I can fire a.out manually without problems, so gfortran is confirmed installed and working correctly.
Yet when I fire a.out from a cron scheduled task, all that is generated is an empty cron.log file.
At least I can see that the scheduled task is firing, but I cannot get the compiled Fortran file to run.
I am using a Mac.
My crontab entry:
42 13 * * * /bin/sh /Applications/MAMP/htdocs/dev/Weather-Forecaster/_fortran/hour36/a.out >> /Applications/MAMP/htdocs/dev/Weather-Forecaster/_fortran/hour36/cron.log
So my question is: what might I be doing wrong that is causing my cron job to not generate output?
Thanks for your help.

Your problem is that you are calling your program as a shell script in your crontab. Your crontab should be:
31 10 * * * /Applications/MAMP/htdocs/dev/Weather-Forecaster/_fortran/hour36/a.out
Also note that you only need gfortran installed and working to produce the executable. Once you have an executable, running it no longer depends on the compiler, though it may depend on shared libraries. Thus the ability to run the executable doesn't really confirm that gfortran is installed properly; it was the successful compilation of the program that verified that.
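As an aside, if you ever need to check which shared libraries an executable depends on, you can list them; on a Mac that is otool (ldd is the Linux equivalent):
otool -L ./a.out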

Here is where I found my answer:
How to set up a cron job to run an executable every hour?
What my successful cron task looks like:
31 14 * * * cd /Applications/MAMP/htdocs/dev/Weather-Forecaster/_fortran/hour36/ && ./a.out
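The cd matters because a Fortran program writes unnamed-unit output such as fort.11 into the current working directory, and cron starts jobs from your home directory by default. If you also want to keep the log from the original attempt, an entry along these lines should work (a sketch; adjust the schedule and paths to your setup):
42 13 * * * cd /Applications/MAMP/htdocs/dev/Weather-Forecaster/_fortran/hour36 && ./a.out >> cron.log 2>&1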

Related

Xcode 8.3: Run target with input from file

I wrote a simple C++ program that requires some input to run. In the terminal I simply run it as ./myProgram < fileWithData.txt. However, I could not figure out how to specify an input file for the target executed in Xcode. I used the command line project template. Of course I could use a different target, for example run Terminal.app and then pass it the executable with the input file, but then I can no longer debug it.
This question: Cannot get lldb to read file input explains how to set the input path in lldb, but I could not find a way to specify lldb commands that are executed before the process is started.
I don't think there's a way to do this entirely from within Xcode. However, if you set the Run scheme in Xcode to the launch mode "Wait for executable to be launched," hit Run, and then run your program from Terminal.app with the appropriate piping, the Xcode-embedded lldb will connect to it.
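With the scheme set to wait, the Terminal.app side is just the usual invocation with redirection; once the process starts, the Xcode-embedded lldb attaches to it (same program and input file as above):
./myProgram < fileWithData.txt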

Problems with using gperftools on Mac OS X

I have found several conflicting answers on this topic. This blog post requires libunwind, but that doesn't work on Mac OS X. I included #include <google/profiler.h> in my code, but my compiler (g++) could not find the library. I installed gperftools via Homebrew. In addition, I found this Stack Overflow question showing this:
Then I ran pprof to generate the output:
[hidden ~]$ pprof --text ./a.out cpu.profile
Using local file ./a.out.
Using local file cpu.profile.
Removing __sigtramp from all stack traces.
Total: 282 samples
107 37.9% 37.9% 107 37.9% 0x000000010d72229e
16 5.7% 43.6% 16 5.7% 0x000000010d721a5f
12 4.3% 47.9% 12 4.3% 0x000000010d721de8
...
Running that command (without any of the prior steps) gets me this:
[hidden]$ pprof --text ./a.out cpu.profile
Using remote profile at ./a.out.
Failed to get the number of symbols from http://cpu.profile/pprof/symbol
Why does it try to access an internet site on my machine but use a local file on theirs?
Attempting to link libprofiler as a dry run with g++ gets me:
[hidden]$ g++ -l libprofiler
ld: library not found for -llibprofiler
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I have looked at the man pages, the help option text, the official online guide, blog posts, and many other sources.
I am so confused right now. Can someone help me use gperftools?
The result of my conversation with #osgx was this script. I tried to clean it up a bit. It likely contains quite a few unnecessary options too.
The blog post https://dudefrommangalore.wordpress.com/2012/02/09/profiling-c-code-using-google-performance-tools/ "Profiling C++ code using Google Performance Tools" 2012 by dudefrommangalore missed the essential step.
You should link the program you want profiled with the CPU profiler library from gperftools.
Check official manual: http://goog-perftools.sourceforge.net/doc/cpu_profiler.html, part "Linking in the Library"
add -lprofiler to the link-time step for your executable. (It's also probably possible to add in the profiler at run-time using LD_PRELOAD, but this isn't necessarily recommended.)
The second step is to collect the profile: run the code with profiling enabled. In the Linux world this is done by setting the controlling environment variable CPUPROFILE before running:
CPUPROFILE=name_of_profile ./program_to_be_profiled
The third step is to use pprof (google-pprof in the Ubuntu world). Check that a non-empty name_of_profile file was generated; if there is no such file, pprof will try to do a remote profile fetch (you saw the output of such an attempt).
pprof ./program_to_be_profiled name_of_profile
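Putting the three steps together, a minimal end-to-end session might look like this (a sketch, assuming Homebrew's gperftools lives under /usr/local and the source file is named my_program.cc; adjust names and paths):
g++ -I/usr/local/include -L/usr/local/lib -o my_program my_program.cc -lprofiler
CPUPROFILE=cpu.profile ./my_program
pprof --text ./my_program cpu.profile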
First you need to run your program with profiling enabled.
This usually means first linking your program with libprofiler and then running it with CPUPROFILE=cpu.profile set.
I.e.
$ CPUPROFILE=cpu.profile my_program
I think that latter step is what you have been missing.
The program will create the cpu.profile file when it exits. You can then use pprof (preferably from github.com/google/pprof) on it to visualize and analyze the profile.

running parallel code on PC

I have Fortran code that has been parallelized with OpenMP. I want to test my code on my PC before running it on the HPC system. My PC has a dual-core CPU and I work on Linux Mint. I installed gfortran-multilib, and this is my script:
#!/bin/bash
### Job name
#PBS -N pme
### Keep Output and Error
#PBS -j eo
### Specify the number of nodes and thread (ppn) for your job.
#PBS -l nodes=1:ppn=2
### Switch to the working directory;
cd $PBS_O_WORKDIR
### Run:
OMP_NUM_THREADS=$PBS_NUM_PPN
export OMP_NUM_THREADS
ulimit -s unlimited
./a.out
echo 'done'
What more should I do to run my code?
OK, I changed the script as suggested in the answers:
#!/bin/bash
### Switch to the working directory;
cd Desktop/test
### Run:
OMP_NUM_THREADS=2
export OMP_NUM_THREADS
ulimit -s unlimited
./a.out
echo 'done'
my code and its executable file are in the folder test on the Desktop, so:
cd Desktop/test
is this correct?
then I compile my simple code:
implicit none
!$OMP PARALLEL
write(6,*)'hi'
!$OMP END PARALLEL
end
with the command:
gfortran -fopenmp test.f
and then run it with:
./a.out
but only one "hi" is printed as output. What should I do?
(And a question about this site: in a situation like this, should I edit my post or just add a comment?)
You don't need, and probably don't want, to use the script on your PC, not even to learn how to use such a script, because these scripts are too closely tied to the specifics of each supercomputer.
I use several supercomputers/clusters and I cannot just reuse a script from one on another, because they are so different.
On your PC you should just do the following.
Set the number of OpenMP threads to 2 (optional, as it is probably the default); adjust if you need some other number:
export OMP_NUM_THREADS=2
cd to the working directory:
cd my_working_directory
Your working directory is the directory where you have the required data or where the executable resides. In your case it seems to be the directory where a.out is.
Then run the damn thing:
ulimit -s unlimited
./a.out
That's it.
You can also store the standard output and error output in files
./a.out > out.txt 2> err.txt
to mimic the supercomputer behaviour.
The PBS variables are only set when you run the script using qsub. You probably don't have that on your PC and you probably don't want to have it either.
$PBS_O_WORKDIR is the directory where you run the qsub command, unless you set it differently by other means.
$PBS_NUM_PPN is the number you indicated in #PBS -l nodes=1:ppn=2. The queue system reads that and sets this variable for you.
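For comparison, on the cluster the script is not executed directly but submitted to the queue, which is when those variables get set (the script name here is illustrative):
qsub my_job_script.sh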
The script you posted is for the Portable Batch System (https://en.wikipedia.org/wiki/Portable_Batch_System) queue system. That means that the job you want to run on the HPC infrastructure first has to go into the queue system, and the job will run when the resources are available.
Some of the commands (those starting with #PBS) are directives specific to this queue system. Among them, some allow the user to indicate the application process hierarchy (i.e. the number of processes and threads). Also, keep in mind that since all the PBS commands start with # they are ignored by regular shell script execution. In the case you presented, that is given by
### Specify the number of nodes and thread (ppn) for your job.
#PBS -l nodes=1:ppn=2
which, as the comment indicates, tells the queue system that you want to run 1 process and that each process will have 2 threads. The queue system is likely to pass these parameters to the process launcher (srun/mpirun/aprun/... for MPI apps, in addition to setting OMP_NUM_THREADS for OpenMP apps).
If you want to run this job on a computer that does not have PBS queue, you should be aware at least of two things.
1) The following command
### Switch to the working directory;
cd $PBS_O_WORKDIR
will be translated into a bare "cd" because the environment variable PBS_O_WORKDIR is only defined within the PBS job context. So you should change this command (or execute another cd command just before the execution) in order to set where you want to run the job.
2) Similarly for the PBS_NUM_PPN environment variable,
OMP_NUM_THREADS=$PBS_NUM_PPN
export OMP_NUM_THREADS
this variable won't be defined if you don't run this within a PBS job context, so you should set OMP_NUM_THREADS to the value you want (2, according to your question) manually.
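Applying both changes, a local stand-in for the posted job script might look like this (a sketch; substitute your actual working directory):
#!/bin/bash
### Switch to the working directory (replaces cd $PBS_O_WORKDIR);
cd /path/to/working/directory
### Run (OMP_NUM_THREADS set manually instead of from $PBS_NUM_PPN):
export OMP_NUM_THREADS=2
ulimit -s unlimited
./a.out
echo 'done'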
If you want your Linux box environment to be like an HPC login node, you can do the following:
Make sure that your compiler supports OpenMP; test a simple hello world program with OpenMP flags.
Install OpenMPI on your system from your favourite package manager or download the source/binary from the website (OpenMPI Download).
I would not recommend installing a cluster manager like Slurm for your experiments.
After you are done, you can execute your MPI programs through the mpirun wrapper
mpirun -n <no_of_cores> <executable>
EDIT:
This assumes that you are running MPI only. Note that OpenMP utilizes the cores as well. If you are running MPI+OpenMP, keep n * OMP_NUM_THREADS equal to the number of cores on a single node.
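For example, on the dual-core PC from the question, a pure MPI program would run as (program names here are illustrative):
mpirun -n 2 ./my_mpi_program
while a hybrid MPI+OpenMP run satisfying n * OMP_NUM_THREADS = cores would be:
OMP_NUM_THREADS=2 mpirun -n 1 ./my_hybrid_program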

Need GLIBC debug information from rpmbuild of updated source

I'm working on RHEL WS 4.5.
I've obtained the glibc source rpm matching this system and extracted its contents using rpm2cpio.
Working in that tree, I've created a patch to mtrace.c (I want to add more stack backtrace levels), incorporated it in the spec file, and created a new set of RPMs including the debuginfo rpms.
I installed all of these on a test vm (created from the same RH base image) and can confirm that my changes are included.
But with more complex executions I crash in mtrace.c ... and gdb can't find the debug information, so I don't get line number info and can't actually debug the failure.
Based on dates, I think I can confirm that the debug information is installed on the test system in /usr/src/debug/glibc-2.3.6/
I tried
sharedlibrary libc*
in gdb and it tells me the symbols are already loaded.
My test includes a locally built python and full symbols are found for python.
My sense is that perhaps glibc isn't being built under rpmbuild with debug enabled. I've reviewed the glibc.spec file and even built with
_enable_debug_packages
defined as 1, which looked like it might influence the result. My review of the configure scripts invoked during the rpmbuild build step didn't give me any hints.
Hmmmm .. just found /usr/lib/debug/lib/libc-2.3.4.so.debug
and /usr/lib/debug/lib/tls/i486/libc-2.3.4.so.debug
but both of these are reported as stripped by the file command.
It appears that you are installing non-matching RPMs:
/usr/src/debug/glibc-2.3.6
just found /usr/lib/debug/lib/libc-2.3.4.so.debug
These are not for the same version; there is no way they came from the same -debuginfo RPM.
both of these are reported as stripped by the file command.
These should not show as stripped. Either they were not built correctly, or your strip is busted.
Also note that you don't actually have to get all of this working to debug your problem. In the RPMBUILD directory, you should be able to find the glibc build directory, with a full-debug libc.so.6. Just copy that library into your VM, and you won't have to worry about the debuginfo RPM.
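One possible way to locate and copy it (a sketch; the exact build path depends on your rpmbuild topdir and target architecture, and the VM hostname is illustrative):
find /usr/src/redhat/BUILD/glibc-2.3.6 -name 'libc.so*'
scp /usr/src/redhat/BUILD/glibc-2.3.6/build-i686-linux/libc.so.6 testvm:/tmp/
You can then point gdb at the unstripped copy, or run the failing test against it with LD_PRELOAD=/tmp/libc.so.6.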
Try verifying that debug info for mtrace.c is indeed present. First see if the separate debug info for GLIBC knows about a compilation unit called mtrace.c:
$ eu-readelf -w /usr/lib/debug/lib64/libc-2.15.so.debug > t
$ grep mtrace t
name (strp) "mtrace.c"
name (strp) "mtrace"
1 0 0 0 mtrace.c
[10480] "mtrace.c"
[104bb] "mtrace"
[5052] symbol: mtrace, CUs: 446
Then see if GDB actually finds the source file from the glibc-debuginfo RPM:
(gdb) set pagination off
(gdb) start # pause your test program at the start of main()
(gdb) set logging on
Copying output to gdb.txt.
(gdb) info sources
Quit GDB, then grep for mtrace in gdb.txt; you should find something like /usr/src/debug/glibc-2.15-a316c1f/malloc/mtrace.c
This works with GDB 7.4. I'm not sure the GDB version shipped with RHEL 4.5 supports all the commands used above. Building upstream GDB from source is in fact easier than building Python, though.
When trying to add stack traces to mtrace, make sure you don't call malloc() directly or indirectly in the GLIBC malloc hooks.

What needs to be done to get a distributable program from Eclipse?

I’ve produced a C++ program in Eclipse running on Red Hat, which compiles and runs fine through Eclipse.
I thought that to run it separately from Eclipse you use the build artifact, which is in the directory set via the project’s properties.
However this executable doesn’t run (I know it’s an executable as I’ve set it to be an executable via the project’s properties and it shows up as such via the ls command and the file explorer).
When attempting to run it using the executable’s name, I get the error:
bash: <filename>: command not found
When attempting to run it as a bash file:
<filename>: <filename>: cannot execute binary file
And when running it with "./" before the file name, nothing happens. Nothing new appears in the running processes and the terminal just goes to the next line as though I’d just pressed enter with no command.
Any help?
You've more or less figured out the first error yourself. When you just run <filename>, its directory is not in your PATH environment variable, so you get "command not found". You have to give a full or relative path to the program in order to run it, even if you're in the same directory as the program: you run it with ./<filename>
When you do run your program, it appears to just exit as soon as you start it; we can't help much with that without knowing what the program does or seeing some code.
You can do some debugging, e.g. after the program exits, run echo $? to see if it exited with a particular exit value, or run your program under the strace tool to see what it does (or do it the usual way: insert printf debugging, or debug it with gdb).
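As a starting point, those checks might look like this (a sketch; the trace file name is arbitrary):
./<filename>; echo $?
strace -f -o trace.txt ./<filename>
The first line prints the program's exit status (non-zero usually indicates an error); the second logs every system call to trace.txt so you can inspect what the program did before exiting.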