Let's call our main code, which requires a function f(x), Main.f90, and the source codes S01.f90, S02.f90, etc., which contain varying forms of f(x). I'd like Main.f90 to write the data it computes with the f(x) from Sxx.f90 into a folder "Sxx".
I compile through a bat file as gfortran -o RunMe.exe Sxx.f90 Main.f90.
At first it seems the code would need to be conscious of its compiled components, but not only do I not know how to do this, I believe there's probably a much better way.
So far I have my code written so that I feed it a folder name from a .txt file, but again, I'd like it to simply take what's already known from the source code.
If there are any other suggestions, please mention them! It doesn't have to be exactly as I stated. Here's the gist: Computation, Blueprint, Results. I want Computation in a father folder, with daughter folders named after the Blueprints and the results inside based on those Blueprints. The Blueprints can go in these daughter folders, or in their own. Whatever is simplest! Thanks!
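To sketch the layout I have in mind (all names here are just placeholders):

Computation/
    S01/
        S01.f90      (blueprint)
        results.dat  (output computed with S01's f(x))
    S02/
        S02.f90
        results.dat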
How about simply generating several executables depending on which particular Sxx.f90 was used for the build? For example,
gfortran -o RunMe01.exe S01.f90 Main.f90
gfortran -o RunMe02.exe S02.f90 Main.f90
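In a bat file, that build could be looped over every blueprint. A minimal sketch, assuming the blueprints all match the pattern S*.f90 (the RunSxx.exe naming is just an example):

@echo off
rem Build one executable per blueprint: S01.f90 -> RunS01.exe, S02.f90 -> RunS02.exe, ...
for %%S in (S*.f90) do (
    gfortran -o Run%%~nS.exe %%S Main.f90
)

Each executable then knows, by construction, which blueprint it was built from.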
I have a program (cpp) with many classes. Every class is in a separate source file (.h + .cpp).
How can I split the compiled program into multiple files (instead of one big executable file)?
Let's say, one file for every class (same as the code structure).
So that every time there is change in a specific class, I compile only that class, and replace the specific compiled file related to that class.
(Something similar to .DLL files in Windows.)
Example from real life:
I am making a TUI interface for managing MySQL; I would like to create a MySQL text editor (TUI) with ncurses.
The code (class) for creating and managing a single window object is in
'textWin.cpp' + 'textWin.h'
The code (class) for managing multiple windows, by creating window objects from the previous class, is in
'winMan.cpp' + 'winMan.h'
The code (class) for managing the MySQL database is in
'mysql.cpp' + 'mysql.h'
and so on...
so, I have the following files:
MyProgram.cpp
- winMan.cpp + winMan.h
- textWin.cpp + textWin.h
- mysql.cpp + mysql.h
- ..
- ..
After g++ compilation, I get one executable file, './MyProgram' (about 15 MB in size), which I deliver to all my customers (thousands of them).
I just found a typo in textWin.cpp, I fixed it, and I told all my customers that there is an update... all of them need to download one big 15 MB file. This consumes a lot of bandwidth and server resources, for just a small update.
Is there a way to send all my customers a smaller file that contains only the compiled code for the textWin class?
I use g++ on CentOS 7.
The gcc compiler will happily take a list of cpp files to compile together to make one executable. You don't need to write a "containing" cpp file. However, you still have the issue that each time it rebuilds them all.
The alternative is to build each source file separately to an object file, then link those all together. Hopefully those invocations of the compiler will add up to less time than the single command line. But how do you keep track of which cpp files actually need to be rebuilt?
The usual approach is to use a makefile and a make utility which will check the dates of all the mentioned files. There are a variety of flavours of makefile, and helper makefile engines. Download a simple package like gzip and you can quickly get an idea of how the Makefile is structured. Then there is lots of help online, or you may decide that this is just too much trouble for a project with 5 files in it.
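A minimal sketch of such a makefile, using the asker's file names (the flags are assumptions, header dependencies are omitted for brevity, and recipe lines must start with a tab):

CXX      = g++
CXXFLAGS = -Wall -O2
OBJS     = MyProgram.o winMan.o textWin.o mysql.o

MyProgram: $(OBJS)
        $(CXX) -o $@ $(OBJS)

%.o: %.cpp
        $(CXX) $(CXXFLAGS) -c $< -o $@

With this, fixing the typo in textWin.cpp recompiles only textWin.o before relinking.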
As suggested in the comments by @RSahu,
Shared Libraries (.so files) is the way to split your compiled code.
here is a small example:
https://www.cprogramming.com/tutorial/shared-libraries-linux-gcc.html
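For the asker's layout, the build might look roughly like the following sketch (standard g++ flags; the library name and the decision to split out only textWin are illustrative):

# Compile textWin as position-independent code and link it into a shared library
g++ -fPIC -c textWin.cpp -o textWin.o
g++ -shared -o libtextWin.so textWin.o

# Link the main program against it; $ORIGIN lets the binary find the .so in its own directory
g++ MyProgram.cpp winMan.cpp mysql.cpp -L. -ltextWin -Wl,-rpath,'$ORIGIN' -o MyProgram

A fixed typo in textWin.cpp would then mean shipping only the rebuilt libtextWin.so.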
Of course, you could put your texts into separate text files and only deploy those when the error is in them. For your special use case, where binary differences must be deployed, this question might be helpful: How do I create binary patches?
Another option: do proper versioning. That way, your customers can decide for themselves whether they need this update.
I have seen one other answer (link), but what I don't understand is: what is basis.cm and what is its use?
You are asking two questions.
What is basis.cm and what is its use?
This is the Basis Library, SML's standard library. Including it allows the use of the built-in functions.
How to compile and execute a stand-alone SML-NJ executable
Assuming you followed Jesper Reenberg's tutorial on how to execute a heap image, the next thing you need in order to have SML/NJ produce a stand-alone executable is to convert this heap image. One should hypothetically be able to do this using heap2exec, a tool that takes the heap image, e.g. the .x86-linux file generated on my system, and generates an .asm file that can be assembled and linked.
Unfortunately, this tool is not very well-maintained, so you have to
Go to the smlnj.org page and fix the download link by removing 'www.' (this page and the SourceForge page don't contain the same explanations or assumptions about argument count, and neither page's download link works).
Download and extract this tool, and fix the 'build' script so it points to your ml-build tool
Fix the tool's argument use by changing [inf, outf] to [_, inf, outf]
Run ./build which generates 'heap2asm.x86-linux' on my system
For example, in order to generate an .asm file for the heap2asm program itself, run
sml @SMLload heap2asm.x86-linux heap2asm.x86-linux heap2asm.s
At this point, I have unfortunately been unable to produce an executable that works. E.g. if you run gcc -c heap2asm.s and ld heap2asm.o, you get a warning of a missing _start label. The resulting executable segfaults even if you rename the existing _sml_heap_image label to _start. That is, it seems that a piece of entry code that the runtime environment normally delivers is missing here.
At this point, discard SML/NJ and use MLton for producing stand-alone binaries.
I have a bunch of C++ programs each in its own sub-directory. Each sub-directory has a single C++ program in several files -- a .h and a .cpp file for each class plus a main .cpp program. I want to compile each program placing the executable in the corresponding sub-directory. (I also want to run each program and redirect its output to a file that is placed in the corresponding sub-directory but if I can get the compilation to work, I shouldn't have a problem figuring out this part.)
I'm using the bash shell on a UNIX system (actually the UNIX emulator Cygwin that runs on top of Windows).
I've managed to find on the web a short script for compiling one-file programs in the current directory, but that's as far as I've gotten. That script is as follows.
for f in *.cpp; do
    g++ -Wall -O2 "$f" -o "${f%.cpp}"
done
I would really appreciate it if someone could help me out. I need to do this task on average once every two weeks (more like 8 weeks in a row, then not for 8 weeks, etc.).
Unless you're masochistic, use makefiles instead of shell scripts.
Since (apparently) each executable depends on all the .h and .cpp files in the same directory, the makefiles will be easy to write -- each will have something like:
whatever.exe: x.o y.o z.o
        g++ -o whatever.exe x.o y.o z.o
You can also add a target in each to run the resulting executable:
run: whatever.exe
        ./whatever.exe
With that you'll use make run to run the executable.
Then you'll (probably) want a makefile in the root directory that recursively makes the target in each subdirectory, then runs each (as described above).
This has a couple of good points -- primarily that it's actually built for this kind of task, so it actually does it well. Another is that it takes note of the timestamps on the files, so it only rebuilds the executables that actually need it (i.e., where at least one of the files that executable depends on has been modified since the executable itself was built).
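The root makefile could be as small as this sketch (it assumes every immediate subdirectory is one of the program directories and has a makefile with the targets above; recipe lines start with a tab):

SUBDIRS := $(wildcard */)

.PHONY: all run $(SUBDIRS)

all: $(SUBDIRS)

$(SUBDIRS):
        $(MAKE) -C $@

run: all
        for d in $(SUBDIRS); do $(MAKE) -C $$d run; done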
Assuming you have a directory whose immediate subdirectories are all C++ programs, use some variation on this...

for D in */; do
    cd "$D"
    # then either call make or call your g++
    # with whatever arguments in here,
    # or nest that script you found online if it seems to
    # be doing the trick for you.
    cd ..
done
That will move into each directory, do its thing (whatever you want that to be), and then move back out.
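Combined with the compile-and-redirect part of the question, the body might look like this sketch (the executable name and output.txt are made up; the subshell saves the cd back out):

for D in */; do
    (
        cd "$D" || exit
        name=$(basename "$D")
        # compile every .cpp in this directory into one executable named after the directory,
        # then run it and capture its output
        g++ -Wall -O2 *.cpp -o "$name" && ./"$name" > output.txt
    )
done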
We have a large set of C++ projects (GCC, Linux, mostly static libraries) with many dependencies between them. Then we compile an executable using these libraries and deploy the binary on the front-end. It would be extremely useful to be able to identify that binary. Ideally what we would like to have is a small script that would retrieve the following information directly from the binary:
$ ident binary
binary:     Product=PRODUCT_NAME;Version=0.0.1;Build=xxx;User=xxx...
dependency: Product=PRODUCT_NAME1;Version=0.1.1;Build=xxx;User=xxx...
dependency: Product=PRODUCT_NAME2;Version=1.0.1;Build=xxx;User=xxx...
So it should display all the information for the binary itself and for all of its dependencies.
Currently our approach is:
During compilation, for each product we generate Manifest.h and Manifest.cpp and then link Manifest.o into the binary.
The ident script parses the target binary, finds the generated data there, and prints this information.
However, this approach is not always reliable across different versions of gcc.
I would like to ask the SO community: is there a better approach to solve this problem?
Thanks for any advice.
One of the catches with storing data in source code (your Manifest.h and .cpp) is the size limit for literal data, which is dependent on the compiler.
My suggestion is to use ld. It allows you to store arbitrary binary data in your ELF file (so does objcopy). If you prefer to write your own solution, have a look at libbfd.
Let us say we have a hello.cpp containing the usual C++ "Hello world" example. Now we have the following make file (GNUmakefile):
hello: hello.o hello.om
        $(LINK.cpp) $^ $(LOADLIBES) $(LDLIBS) -o $@

%.om: %.manifest
        ld -r -b binary -o $@ $<

%.manifest:
        echo "$@" > $@
What I'm doing here is separating out the linking stage, because I want the manifest (after conversion to ELF object format) linked into the binary as well. Since I am using pattern rules, this is one way to go; others are certainly possible, including a better naming scheme for the manifests where they also end up as .o files and GNU make can figure out how to create those. Here I'm being explicit about the recipe. So we have .om files, which are the manifests (arbitrary binary data), created from .manifest files. The first recipe converts the binary input into an ELF object; the recipe for creating the .manifest itself simply pipes a string into the file.
Obviously the tricky part in your case isn't storing the manifest data, but rather generating it. And frankly I know too little about your build system to even attempt to suggest a recipe for the .manifest generation.
Whatever you throw into your .manifest file should probably be some structured text that can be interpreted by the script you mention or that can even be output by the binary itself if you implement a command line switch (and disregard .so files and .so files hacked into behaving like ordinary executables when run from the shell).
The above make file doesn't take into account the dependencies - or rather it doesn't help you create the dependency list in any way. You can probably coerce GNU make into helping you with that if you express your dependencies clearly for each goal (i.e. the static libraries etc). But it may not be worth it to take that route ...
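As an aside, the objcopy route mentioned above could replace the ld recipe with something like this (the target and architecture flags are assumptions for x86-64 Linux):

objcopy -I binary -O elf64-x86-64 -B i386:x86-64 hello.manifest hello.om

This produces the same kind of object, including the _binary_hello_manifest_* symbols used below.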
Also look at:
C/C++ with GCC: Statically add resource files to executable/library and
Is there a Linux equivalent of Windows' "resource files"?
If you want particular names for the symbols generated from the data (in your case the manifest), you need to use a slightly different route and use the method described by John Ripley here.
How to access the symbols? Easy. Declare them as external (C linkage!) data and then use them:
#include <cstdio>
#include <cstddef> // for std::ptrdiff_t

// Symbols created by the linker around the embedded manifest data
extern "C" char _binary_hello_manifest_start;
extern "C" char _binary_hello_manifest_end;

int main()
{
    // The manifest is not zero-terminated, so compute its length from the two symbols
    const std::ptrdiff_t len = &_binary_hello_manifest_end - &_binary_hello_manifest_start;
    printf("Hello world: %.*s\n", (int)len, &_binary_hello_manifest_start);
}
The symbols are the exact characters/bytes. You could also declare them as char[], but that would result in problems down the road, e.g. for the printf call.
The reason I am calculating the size myself is that (a) I don't know whether the buffer is guaranteed to be zero-terminated and (b) I didn't find any documentation on interfacing with the *_size variable.
Side note: the .* in the format string tells printf to read the precision, i.e. the maximum number of characters to print, from an argument, and then to pick the next argument as the string to print out.
You can insert any data you like into a .comment section in your output binary. You can do this with the linker after the fact, but it's probably easier to place it in your C++ code like this:
asm (".section .comment.manifest\n\t"
".string \"hello, this is a comment\"\n\t"
".section .text");
int main() {
....
The asm statement should go outside any function, in this instance. This should work as long as your compiler puts normal functions in the .text section. If it doesn't then you should make the obvious substitution.
The linker should gather all the .comment.manifest sections into one blob in the final binary. You can extract them from any .o or executable with this:
objdump -j .comment.manifest -s example.o
Have you thought about using the standard packaging system of your distro? At our company we have thousands of packages, and hundreds of them are automatically deployed every day.
We are using Debian packages that contain all the necessary information:
- A full changelog that includes:
  - authors;
  - versions;
  - short descriptions and timestamps of changes.
- Dependency information: a list of all packages that must be installed for the current one to work correctly.
- Installation scripts that set up the environment for a package.
I think you may not need to create manifests in your own way, since a ready solution already exists. You can have a look at the Debian packaging HowTo here.
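For a flavour of where that information lives: a binary package carries a control file whose Depends field records exactly the kind of dependency list asked about above. A hypothetical stanza (every value here is made up):

Package: myprogram
Version: 0.0.1-1
Architecture: amd64
Maintainer: Build Bot <build@example.com>
Depends: libtextwin (>= 1.2), libstdc++6
Description: TUI for managing mysql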
This is a question for experienced C/C++ developers.
I have zero knowledge of compiling C programs with "make", and I need to modify an existing application, i.e. change its "config" and "makefile" files.
The .h files that the application needs are not located in a single-level directory, but rather, they are spread in multiple sub-directories.
In order for cc to find all the required include files, can I just add a single "-I" switch pointing cc at the top-level directory and expect it to search all sub-dirs recursively, or must I add several "-I" switches listing all the sub-dirs explicitly, e.g. -I/usr/src/myapp/includes/1 -I/usr/src/myapp/includes/2, etc.?
Thank you.
This question appears to be about the C compiler driver rather than make. Assuming you are using GCC, you need to list each directory you want searched:
gcc -I/foo -I/foo/bar myprog.c
This is actually a compiler switch, unrelated to make itself.
The compiler will search for include files in the built-in system dirs and then in the paths you provide with the -I switch. However, no automatic sub-directory traversal is performed.
For example, if you have
#include "my/path/to/file.h"
and you give -I a/directory as a parameter, the compiler will look for a/directory/my/path/to/file.h.
If the makefiles are written in the usual way, the line that invokes the compiler will use a couple of variables that allow you to customize the details, e.g. not
gcc (...)
but
$(CC) $(CFLAGS) (...)
and if this is the case, and you're lucky, you don't even need to edit any of the makefiles; instead you can invoke make like this
make CFLAGS='-I /absolute-path/to/wherever'
to incorporate your special options into the compiler invocation.
Also check whether the Makefiles aren't generated by something else (usually a script in the top directory called configure, which will have options of its own to control what goes into them).
Everyone answered your question correctly, but here is something to consider when you get to set up your own source tree: a leaf node should only look in two places for headers, its own directory or up the tree. Once people start going across to peers and down the tree, the build system gets gnarly; what also happens is that folks start using private interfaces when they should be using public interfaces.