How to upload multiple main functions to 1 GitHub repository? - c++

I want to upload my versions of the solutions to the book "Programming Principles and Practice Using C++" to GitHub, so that anyone interested can access the code and run it easily.
Assuming that one repository can hold only one project, and since the solution code for every drill or exercise of the book has to be an independent .cpp file/main function, how can I group all of the code in the same repository while still allowing anyone who clones or downloads it to run and debug each solution independently? (...because there cannot be two main functions in one project, right?)

A separate folder for each solution could do the trick.
You could even go with a "one solution = one .cpp file" scheme. In that case the build is assumed to be done without a makefile, by passing the .cpp file directly to the compiler. But it will create a mess if some solutions require multiple .h/.cpp files.
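For example, building and running one such single-file solution directly might look like this (the folder and file names here are made up):

g++ -Wall -std=c++14 -o exercise_05_03 chapter05/exercise_05_03.cpp
./exercise_05_03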

My misc-basile GitHub repository contains several single-source programs (GPLv3+ licensed, for Linux), including manydl.c, which shows that Linux is capable of dealing with many (that is, more than a hundred thousand) plugins by generating them dynamically, and sync-periodically.c, which periodically calls the sync(2) system call.
You can do likewise for your single file C++ programs.
You could write your Makefile to compile each of them to a different executable, or use some other build automation tool (e.g. ninja).
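A minimal sketch of such a Makefile, assuming each solution is a single self-contained .cpp file in the current directory (the compiler flags are just an example):

# Build every .cpp file into an executable of the same name.
SRCS := $(wildcard *.cpp)
BINS := $(SRCS:.cpp=)

all: $(BINS)

# Recipe lines must be indented with a tab.
%: %.cpp
	g++ -Wall -g -o $@ $<

clean:
	rm -f $(BINS)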
However, don't forget to document (in your README.md) what each of your C++ translation units is supposed to do, and how to compile and run them.
Read more about C++, about operating systems, about your C++ compiler (e.g. GCC) and linker (e.g. binutils) and other software development tools (e.g. git, gdb, emacs).
Details are of course implementation specific.

Related

What is the SVN best practice for storing source when developing and testing with IDEs?

I do a fair amount of personal development on my computer and have used TortoiseSVN (I'm on Windows) for web projects, but haven't used any version control for other languages. Anyway, soon I will be starting a decent-sized C++ project and was going to try using SVN for it.
For web development I normally just used Notepad++, and it was really easy to manage with SVN (just commit the whole source folder). However, for this project I will be using an IDE (most likely Eclipse CDT or Visual Studio), and was wondering what the best practice is for managing all of the IDE, project, and binary files. My guess was to keep the IDE project outside of version control and just point it at the source files in SVN, so that the build and project files aren't committed. This way the only files in SVN would be the .cpp and .h files.
However, if I wanted to switch to a new branch, I would need to update the location of all of the sources and headers to the new folder, which seems like a huge hassle.
What's the best way to handle this?
Thanks
OK, it seems I misread the aim of the question the first time around. I'm now assuming the question is really about what to put under source control and what not.
Well, naturally everything but temporary/transient files.
If you install GitExtensions, it has a built-in feature to populate the .gitignore file; you then adjust it depending on the language. Solution, project, and make files belong under version control; .user files storing IDE preferences do not. Since IDEs and source control have both been ubiquitous for many years, the split has been well established and should be pretty obvious as you go.
External dependencies should normally also be in a repo, though you have to choose which one. Some teams store everything together, others keep one dependency repo, and others use separate repos per component -- it all depends on the actual components and workflow. You can also replace physical storage of the dependencies with an info file containing stable links to the versions used. This can also be dealt with later, on the first change in dependencies.
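As a sketch, a .gitignore for a Visual Studio C++ project typically starts with entries like these (adjust to your own setup):

# IDE per-user preferences
*.user
*.suo
.vs/
# Build output and intermediates
Debug/
Release/
x64/
*.obj
*.pdb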
For Visual Studio, there is a plugin that manages your files for you. As long as the files are part of the project, then they will be put into source control by the plugin. See ankhsvn for plugin info. Note that the express versions of Visual Studio are not supported.
I am sure eclipse has a plugin for SVN as well.

How to keep a cross-platform library in sync across XCode/Visual Studio

I'm developing a system which will have a PC (Windows) component and an iPad component. I'd like to share some C++ code between the iPad and the PC. Is there a way to automatically sync the source files between the projects? In other words, if I'm working on the PC and add a new .h/.cpp pair, can I somehow get the Xcode project to recognize the new files and add them to the Xcode project? The same goes for getting Visual Studio to recognize new files on the PC end.
If this isn't possible, would it make sense to use Eclipse on both the Mac and the PC for this shared library? Is there any other option I should look in to for maintaining a project on both Apple and Windows development environments?
First, you need one common build configuration for all your target platforms. Of course, this means that you can't use the build configurations tied to your IDEs (Visual Studio, XCode, etc.). You need a cross-platform build-system. The best candidate for that, IMO, is CMake. With that system, the CMakeLists.txt files are the primary configuration files for your project. Any new source files / headers will have to be added to that configuration file (or one of them). It might be a little bit less convenient than using the in-IDE facilities to add a header/source pair, but the advantage is that you only have to add the source file once to the build configuration (CMakeLists.txt) and it will apply to all operating systems and IDEs that you are using. CMake can be used to generate project files for most IDEs so that they can be used easily, and some of the better IDEs also support CMake build-configurations directly (which makes it even more convenient). Personally, I don't know of any serious cross-platform project that does not employ an independent cross-platform build-system (like CMake or others with similar capabilities), so this is not really much of a debate anymore.
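For illustration, a minimal CMakeLists.txt for such a shared library might look like the sketch below (the target name mylib and the file names are made up):

cmake_minimum_required(VERSION 3.10)
project(shared_code CXX)

# Every new .h/.cpp pair is added here once, and both the
# Visual Studio and Xcode projects generated by CMake pick it up.
add_library(mylib
    src/foo.cpp
    include/foo.h)
target_include_directories(mylib PUBLIC include)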
Second, you need a means to synchronize your files between the two systems, which I presume are physically separated (i.e., not in a virtual box or whatever). There are simple programs like rsync and other more GUI-ish programs to synchronize folders and all its underlying files. However, for source code, it is much more convenient to use a version-control system. Personally, I recommend Git, especially for personal projects. There are many features to a version control system, but the basic thing is that it gives you a simple way to keep source folders synchronized and keep track of the changes that have been made to the code (e.g., allowing to back-track if a bug suddenly appears out of the latest changes). Even if you are working alone, it is still totally worth it to use such a system (and even if you don't really need it, it gives you experience working with one). Git is a decentralized system, meaning that you don't need a central server for the version control, it is all local to each copy of the repository. This allows you to have (as I do for some simple projects), a completely local set of repositories, for instance, I have two computers I work with, with a copy of the repository on each of them, plus a copy of the repository on an external hard-drive, so all the synchronization is done locally between the computers and external drive (with the added bonus of a constantly up-to-date triple backup of everything). You can also use a central server, such as github, which is even more convenient.
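A sketch of that external-drive setup using only local git commands (the paths are made up):

# On the external drive: create a bare repository to sync through.
git init --bare /mnt/usb/project.git
# On each computer: clone it, then push/pull to synchronize.
git clone /mnt/usb/project.git
cd project
git push origin master
git pull origin master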

g++: Use ZIP files as input

We have the Boost library on our side. It consists of a huge number of files which never change, and only a tiny portion of it is used. We swap the whole boost directory when changing versions. Currently we have the Boost sources in our SVN file by file, which makes checkout operations very slow, especially on Windows.
It would be nice if there were a notation / plugin to address C++ files inside ZIP files, something like:
// #ZIPFS ASSIGN 'boost' 'boost.zip/boost'
#include <boost/smart_ptr/shared_ptr.hpp>
Is there any support for compiler hooks in g++? Is there any effort regarding ZIP support? Other ideas?
I assume that make or a similar buildsystem is involved in the process of building your software. I'd put the zip file in the repository, and add a rule to the Makefile to extract it before the actual build starts.
For example, suppose your zip file is in the source tree at "external/boost.zip", and it shall be extracted to "external/boost", and it contains at its toplevel a file "boost_version.h".
# external/Makefile
unpack_boost: boost/boost_version.h

boost/boost_version.h: boost.zip
	unzip $<
I don't know the exact syntax of the unzip call, ask your manpage about this.
Then in other Makefiles, you can let your source files depend on the unpack_boost target in order to have make unpack Boost before a source file is compiled.
# src/Makefile (excerpt)
unpack_boost:
	make -C ../external unpack_boost

source_file.cpp: unpack_boost
If you're using a Makefile generator (or an entirely different buildsystem), please check the documentation for these programs for how to create something like the custom target unpack_boost. For example, in CMake, you can use the add_custom_command directive.
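As an illustration of the CMake variant just mentioned, a sketch along these lines (file names as in the Makefile above) could unpack the archive only when needed:

# Re-runs only when boost.zip changes, like the Makefile rule above.
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/boost/boost_version.h
    COMMAND ${CMAKE_COMMAND} -E tar xf ${CMAKE_CURRENT_SOURCE_DIR}/boost.zip
    WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/boost.zip
    COMMENT "Unpacking boost.zip")
add_custom_target(unpack_boost
    DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/boost/boost_version.h)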
The fine print: the boost/boost_version.h file is not strictly necessary for the Makefile to work. You could just put the unzip command into the unpack_boost target, but then the target would effectively be phony, that is: it would be executed during each build. The file in between (which of course you need to replace with a file that is actually present in the zip archive) ensures that unzip only runs when necessary.
A year ago I was in the same position as you. We kept our source in SVN and, even worse, included boost in the same repository (same branch) as our own code. Trying to work on multiple branches was impossible, as it would take most of a day to check-out a fresh working copy. Moving boost into a separate vendor repository helped, but it would still take hours to check-out.
I switched the team over to git. To give you an idea of how much better it is than SVN, I have just created a repository containing the boost 1.45.0 release, then cloned it over the network. (Cloning copies all of the repository history, which in this case is a single commit, and creates a working copy.)
That clone took six minutes.
In the first six seconds a compressed copy of the repository was copied to my machine. The rest of the time was spent writing all of those tiny files.
I heartily recommend that you try git. The learning curve is steep, but I doubt you'll get much pre-compiler hacking done in the time it would take to clone a copy of boost.
We've been facing similar issues in our company. Managing boost versions in build environments is never going to be easy. With 10+ developers, all coding on their own system(s), you will need some kind of automation.
First, I don't think it's a good idea to store copies of big libraries like boost in SVN or any SCM system for that matter; that's not what those systems are designed for, unless you plan to modify the boost code yourself. But let's assume you're not doing that.
Here's how we manage it now, after trying lots of different methods, this works best for us.
For every version of boost that we use, we put the whole tree (unzipped) on a file server and we add extra subdirectories, one for each architecture/compiler-combination, where we put the compiled libraries.
We keep copies of these trees on every build system and in the global system environment we add variables like:
BOOST_1_48=C:\boost\1.48 # Windows environment var
or
BOOST_1_48=/usr/local/boost/1.48 # Linux environment var, e.g. in /etc/profile.d/boost.sh
This directory contains the boost tree (boost/*.hpp) and the added precompiled libs (e.g. lib/win/x64/msvc2010/libboost_system*.lib, ...)
All build configurations (vs solutions, vs property files, gnu makefiles, ...) define an internal variable, importing the environment vars, like:
BOOSTROOT=$(BOOST_1_48) # e.g. in a Makefile, or an included Makefile
and further build rules all use the BOOSTROOT setting for defining include paths and library search paths, e.g.
CXXFLAGS += -I$(BOOSTROOT)
LFLAGS += -L$(BOOSTROOT)/lib/linux/x64/ubuntu/precise
LFLAGS += -lboost_date_time
The reason for keeping local copies of boost is compilation speed. It takes up quite a bit of disk space, especially the compiled libs, but storage is cheap and a developer losing lots of time compiling code is not. Plus, this only needs to be copied once.
The reason for using global environment vars is that build configurations are transferable from one system to another, and can thus be safely checked in to your SCM system.
To smooth things out a bit, we've developed a little tool that takes care of the copying and of setting the global environment vars. With a CLI, this can even be included in the build process.
Different working environments mean different rules and cultures, but believe me, we've tried lots of things and finally, we decided to define some kind of convention. Maybe ours can inspire you...
This is something you would not do in g++, because any other application that wants to do it would also have to be modified.
Store the files on a compressed filesystem. Then every application gets the benefit automatically.
It should be possible in an OS to allow transparent access to files inside a ZIP file. I know that I put it in the design of my own OS a long time ago (2004 or so) but never got it to a point where it was usable. The downside is that seeking backwards in a file inside a ZIP is slower as it's compressed (and you can't rewind the compressor state, so you have to seek from the start instead). This also makes using a zip-inside-a-zip slow for rewinding and reading. Fortunately, most cases just read a file sequentially.
It should also be retrofittable to current OSes, at least in user space. You can hook the file-access functions used (fopen, open, ...) and add a set of virtual file descriptors that your own software would return for a given filename. If it's a real file, just pass it on; if it's not, open the underlying file (possibly again via this very function) and pass back a virtual handle. When accessing the file contents, read directly from the zip file without caching.
On Linux you would use an LD_PRELOAD to inject it into existing software (at usage time), on Windows you can hook the system calls or inject a DLL into the space of software to hook the same functions.
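A minimal sketch of that fopen-hooking idea on Linux (glibc assumed; a real version would map ZIP paths to virtual handles instead of just logging):

/* Build: gcc -shared -fPIC zipfs_hook.c -o zipfs_hook.so -ldl
   Use:   LD_PRELOAD=./zipfs_hook.so some_program */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

FILE *fopen(const char *path, const char *mode)
{
    /* Look up the real fopen the first time through. */
    static FILE *(*real_fopen)(const char *, const char *);
    if (!real_fopen)
        real_fopen = (FILE *(*)(const char *, const char *))
                         dlsym(RTLD_NEXT, "fopen");

    /* A real implementation would check here whether path points
       inside a ZIP file and return a virtual handle; this sketch
       just logs the call and passes it through. */
    fprintf(stderr, "fopen(%s, %s)\n", path, mode);
    return real_fopen(path, mode);
}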
Does anybody know if this already exists? I can't see any clear reason it wouldn't...

What is the difference between compile code and executable code?

I always use the terms compile and build interchangeably.
What exactly do these terms stand for?
Compiling is the act of turning source code into object code.
Linking is the act of combining object code with libraries into a raw executable.
Building is the sequence composed of compiling and linking, with possibly other tasks such as installer creation.
Many compilers handle the linking step automatically after compiling source code.
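For example, with g++ the two steps can be run separately or together (the file names are made up):

# Compile only: each .cpp becomes object code (.o), no executable yet.
g++ -c main.cpp util.cpp
# Link: combine the object files (and any libraries) into an executable.
g++ main.o util.o -o app
# Or let the compiler driver do both steps in one call.
g++ main.cpp util.cpp -o app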
From wikipedia:
In the field of computer software, the term software build refers either to the process of converting source code files into standalone software artifact(s) that can be run on a computer, or the result of doing so. One of the most important steps of a software build is the compilation process where source code files are converted into executable code.
While for simple programs the process consists of a single file being compiled, for complex software the source code may consist of many files and may be combined in different ways to produce many different versions.
A build can be seen as a script which comprises many steps, the primary one of which is compiling the code.
Others could be
running tests
reporting (e.g. coverage)
static analysis
pre and post-build steps
running custom tools over certain files
creating installs
labelling them and deploying/copying them to a repository
They often are used to mean the same thing. However, "build" may also mean the full process of compiling and linking a whole application (in the case of e.g. C and C++), or even more, including, among others
packaging
automatic (unit and/or integration) testing
installer generation
installation/deployment
documentation/site generation
report generation (e.g. test results, coverage).
There are systems like Maven, which generalize this with the concept of lifecycle, which consists of several stages, producing different artifacts, possibly using results and artifacts from previous stages.
From my experience I would say that "compiling" refers to the conversion of one or several human-readable source files into byte code (object files in C), while "building" denotes the whole process of compiling, linking and whatever else needs to be done for an entire package or project.
Most people would probably use the terms interchangeably.
You could see one nuance: compiling is only the step where you pass some source files through the compiler (gcc, javac, whatever).
Building can be understood as the more general process of checking out the source, creating a target folder for the compiled artifacts, checking dependencies, choosing what has to be compiled, running automated tests, creating a tar/zip distribution, pushing to an FTP server, etc.

Lisp Executable

I've just started learning Lisp and I can't figure out how to compile and link lisp code to an executable.
I'm using clisp and clisp -c produces two files:
.fas
.lib
What do I do next to get an executable?
I was actually trying to do this today, and I found typing this into the CLisp REPL worked:
(EXT:SAVEINITMEM "executable.exe"
:QUIET t
:INIT-FUNCTION 'main
:EXECUTABLE t
:NORC t)
where main is the name of the function you want to call when the program launches, :QUIET t suppresses the startup banner, and :EXECUTABLE t makes a native executable.
It can also be useful to call
(EXT:EXIT)
at the end of your main function in order to stop the user from getting an interactive lisp prompt when the program is done.
EDIT: Reading the documentation, you may also want to add :NORC t (read link). This suppresses loading the RC file (for example, ~/.clisprc.lisp).
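Putting the pieces together, a minimal sketch entered at the CLISP REPL could look like this (the greeting and file name are made up):

(defun main ()
  (format t "Hello from a standalone CLISP executable!~%")
  (ext:exit))

(ext:saveinitmem "hello.exe"
                 :quiet t
                 :init-function 'main
                 :executable t
                 :norc t)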
This is a Lisp FAQ (slightly adapted):
*** How do I make an executable from my programme?
This depends on your implementation; you will need to consult your
vendor's documentation.
With ECL and GCL, the standard compilation process will
produce a native executable.
With LispWorks, see the Delivery User's Guide section of the
documentation.
With Allegro Common Lisp, see the Delivery section of the
manual.
etc...
However, the classical way of interacting with Common Lisp programs
does not involve standalone executables. Let's consider this during
two phases of the development process: programming and delivery.
Programming phase: Common Lisp development has more of an
incremental feel than is common in batch-oriented languages, where an
edit-compile-link cycle is common. A CL developer will run simple
tests and transient interactions with the environment at the
REPL (or Read-Eval-Print-Loop, also known as the
listener). Source code is saved in files, and the build/load
dependencies between source files are recorded in a system-description
facility such as ASDF (which plays a similar role to make in
edit-compile-link systems). The system-description facility provides
commands for building a system (and only recompiling files whose
dependencies have changed since the last build), and for loading a
system into memory.
Most Common Lisp implementations also provide a "save-world" mechanism
that makes it possible to save a snapshot of the current lisp image,
in a form which can later be restarted. A Common Lisp environment
generally consists of a relatively small executable runtime, and a
larger image file that contains the state of the lisp world. A common
use of this facility is to dump a customized image containing all the
build tools and libraries that are used on a given project, in order
to reduce startup time. For instance, this facility is available under
the name EXT:SAVE-LISP in CMUCL, SB-EXT:SAVE-LISP-AND-DIE in
SBCL, EXT:SAVEINITMEM in CLISP, and CCL:SAVE-APPLICATION in
OpenMCL. Most of these implementations can prepend the runtime to the
image, thereby making it executable.
Application delivery: rather than generating a single executable
file for an application, Lisp developers generally save an image
containing their application, and deliver it to clients together with
the runtime and possibly a shell-script wrapper that invokes the
runtime with the application image. On Windows platforms this can be
hidden from the user by using a click-o-matic InstallShield type tool.
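For instance, the SBCL variant of the save-world mechanism mentioned above can produce an executable directly (assuming a function main has already been defined):

(sb-ext:save-lisp-and-die "app"
                          :toplevel #'main
                          :executable t)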
Take a look at the official clisp homepage. There is a FAQ that answers this question.
http://clisp.cons.org/impnotes/faq.html#faq-exec
CLiki has a good answer as well: Creating Executables
For a portable way to do this, I recommend roswell.
For any supported implementation you can create lisp scripts to run the program that can be run in a portable way by ros which can be used in a hash-bang line similarly to say a python or ruby program.
For SBCL and CCL roswell can also create binary executables with ros dump executable.
I know this is an old question but the Lisp code I'm looking at is 25 years old :-)
I could not get compilation working with clisp on Windows 10.
However, it worked for me with gcl.
If my lisp file is jugs2.lisp,
gcl -compile jugs2.lisp
This produces the file jugs2.o if the jugs2.lisp file has no errors.
Run gcl with no parameters to launch the lisp interpreter:
gcl
Load the .o file:
(load "jugs2.o")
To create an EXE:
(si:save-system "jugs2")
When the EXE is run it needs the DLL oncrpc.dll; this is in the <gcl install folder>\lib\gcl-2.6.1\unixport folder that gcl.bat creates.
When the EXE is run, it shows a lisp environment; call (main) to run the main function.