In a regular Visual Studio project you can set up the precompiled header by going to Configuration Properties -> C/C++ -> Precompiled Headers.
However, when dealing with a Cross Platform (Linux) project, the Precompiled Header option is missing.
Is there any way to configure the Precompiled Header?
Apparently in Visual Studio 2022 there is still no easy way to add support for precompiled headers. Luckily, I stumbled upon this interesting solution provided by GTANAdam on his GitHub page: https://gist.github.com/GTANAdam/3703fd5cb70d9a22159f58443a129f8b
With some minor changes I've managed to make things work for my projects.
Running chmod +x precompile.sh is needed, otherwise you might get the error bash: ./precompile.sh: Permission denied.
Also, instead of the suggested sh precompile.sh I ended up invoking it as ./precompile.sh, and the precompile.sh file now starts with the shebang #!/bin/bash (instead of #!/bin/sh).
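In short, the two commands from the notes above are:
chmod +x precompile.sh   # make the script executable; avoids "Permission denied"
./precompile.sh          # run it directly rather than via "sh precompile.sh"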
Sample of my precompile.sh file:
#!/bin/bash
gch="PCH.hpp.gch"   # the generated precompiled header
header="PCH.hpp"    # the header it is generated from

function compile
{
    echo "Generating precompiled header.."
    g++ -I../ -std=c++14 -c -x c++-header PCH.cpp -o "$gch"
}

if [[ -f $gch ]]; then                  # file exists
    if [[ $header -nt $gch ]]; then     # header is newer than the .gch, regenerate
        compile
    else
        echo "Precompiled header already exists.."
    fi
else
    compile
fi
One thing to note: if you're editing the bash script on Windows, the text editors might save CR LF line endings instead of just LF at every new line, and bash on Linux may then start throwing errors like sh: 2: precompile.sh: : not found.
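If that happens, converting the script back to Unix (LF) line endings fixes it; for example (assuming dos2unix is installed, otherwise the sed variant works with plain GNU sed):
dos2unix precompile.sh          # convert CR LF line endings to LF
sed -i 's/\r$//' precompile.sh  # alternative without dos2unix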
So I started using Sublime Text 3 recently on my Ubuntu OS. I wanted to test it out, so I wrote a simple piece of C++ code. But when I try to build, it does nothing. I have checked online and still nothing; I even installed a build system (https://github.com/shikharkunal99/Sublime-Build-System), and still, whenever I go to build, it just opens a black section at the bottom (see picture).
Install g++ to run C++ code:
apt-get install g++
Then I will tell you a personal trick that I use:
find | grep "part of your filename"
Replace "part of your filename" with the name of the file or a part of it. Suppose the file name is Here.c; I would type "Here" in place of "part of your filename".
Then compile the file with g++ and, as the final step, type
./a.out
The output is ready in front of you.
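Putting those steps together, a minimal terminal session could look like this (using the Here.c example; the actual file name will differ):
sudo apt-get install g++   # install the GNU C++ compiler
find . | grep "Here"       # locate the source file, e.g. ./Here.c
g++ Here.c                 # compile it; g++ produces ./a.out by default
./a.out                    # run the program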
This post will help you set up Sublime Text 3 in a way that leads to a good workflow specifically for a C++ programming environment (Ubuntu, GNU C++ Compiler):
Note: Only the following step is essential for running C++ programs.
1. Create a Build System in Sublime Editor :
Sublime Text provides build systems to allow users to run external programs.
Go to Tools -> Build System -> New Build System.
Paste the following code in the file
{
    "cmd": ["g++ -Wall -Wextra -O2 -pthread -H -std=c++17 \"${file}\" -o runfile && ./runfile < input.in > output.out"],
    // The line above works as-is if input.in and output.out are in the same directory as the
    // .cpp file; otherwise give the full paths of those files to use them as common
    // input/output files on your system.
    "shell": true,
    "working_dir": "$file_path",
    "selector": "source.c,source.c++,source.cpp",
    "variants": [
        {
            "name": "Variant Run",
            "cmd": ["gnome-terminal -- bash -c \"g++ $file_name; echo ------------Output-------------; ./a.out; echo; echo; echo Press ENTER to continue; read line; exit; exec bash\""]
        }
    ]
}
Save the file as something like "C++17.sublime-build" to differentiate it from the other build-system files (by default it is placed in the "~/.config/sublime-text-3/Packages/User" directory).
Create input.in and output.out text files in your working directory. These are used for redirecting input from the input.in file and output to the output.out file.
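For example, from a terminal in the working directory:
touch input.in output.out   # create the empty input/output files next to your .cpp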
Note that the build command uses the -std=c++17 flag to enable the latest C++17 features. If you don't want this, or want to use C++14, replace it with the -std=c++14 flag.
Refer to https://linux.die.net/man/1/g++ for different compiler flags.
See Also https://discuss.codechef.com/t/are-any-compiler-flags-set-on-the-online-judge/1866
2. Set up the window layout :
Create a new C++ file, file.cpp. Select View > Layout > Columns : 3. This will create three columns in the workspace. Select View > Groups > Max Columns : 2.
Write a hello world program & save its inputs, if any, in the input.in file, and test that it works. Use Shift+Ctrl+B and select C++17 to build and execute the file (if you select C++17 - Variant Run, it will execute the program in a separate terminal window, like a normal program would).
The windows will look like this when you are done.
Layout Preview
3. Precompile headers :
We can speed up compilation time, which is generally useful in competitive programming, by precompiling the bits/stdc++.h header file, which pulls in all the standard headers.
For this, first navigate to the stdc++.h file. It will be located in a directory similar to /usr/include/x86_64-linux-gnu/c++/9/bits (the exact path depends on your distribution and GCC version). Open a terminal window there.
Run the command sudo g++ -std=c++17 stdc++.h to compile the header. Take care to use the same flags you used in your build system. Check that the stdc++.h.gch file was created in that directory.
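The whole step as a terminal sketch (the bits directory below is just an example path; it varies by distro and GCC version):
cd /usr/include/x86_64-linux-gnu/c++/9/bits   # adjust to your system's GCC include path
sudo g++ -std=c++17 stdc++.h                  # same flags as in the build system
ls stdc++.h.gch                               # the precompiled header should now exist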
4. Sublime Text features :
Snippets & Completion
Read up on the documentation of snippets and completions at the official guide.
5. Other Features :
Read https://scotch.io/bar-talk/best-of-sublime-text-3-features-plugins-and-settings
This works perfectly fine for me using Build 3120, and I expect it will work fine with previous builds. First, you need to select Tools → Build System → C++ Single File (Tools → Build System → Automatic should also work, but I prefer to be explicit). Then, either hit Ctrl+Shift+B or select Tools → Build With… and choose C++ Single File - Run. This will compile your .cpp file to an executable in the same directory as the source file, then run it.
Well, I also had various issues with this; finally I found something great in the Package Control palette. Follow these instructions:
1. Open up the Package Control palette.
2. Search for C++ Builder.
3. You will see C++ Builder-Mingyang Yang.
4. Click it and then wait for a couple of seconds.
5. Go to Tools -> Build System -> select C++ Builder-Mingyang Yang.
6. Hit Shift+Ctrl+B and then select C++ Builder-Mingyang Yang Build and Run.
7. There you go: you can not only build, but also use the console for input.
Note: This will only work when the gcc compiler is available in the terminal; otherwise, first install gcc with the command apt-get install gcc, and then you can use C++.
For example, say I developed a C++ project and put it on GitHub as an open-source project, and in that project I used log4cpp as a 3rd-party library for logging. Interested users would download my project, run "make", and suddenly find that their machines just don't recognize log4cpp.
I need you guys to verify my original thoughts:
Unlike Java, where you may simply include the 3rd-party jar file in your package, I know that C++ is kind of different: you have to let the client compile the library themselves. Obviously, if I do not have the source code of the 3rd-party library, I simply cannot help the client install it. Instead, I need to check whether the user has already installed the required libraries and, if not, warn them not to proceed.
If these statements are true, what tools should I consider using for the check? Also, an add-on question: if I want to produce a non-open-source product, what should I do? (Since the client needs my source code to compile.) If the statements are not true (especially the compile-it-themselves part), please explain why.
Thanks!
Example makefile:
ifeq "$(shell echo '\#include <readline/readline.h>\nint main(){return 0;}' | $(CC) -x c -Wall -O -o /dev/null > /dev/null 2> /dev/null - && echo $$? )" "0"
HAS_FILE = yes
else
HAS_FILE = no
endif
all:
echo has_file=$(HAS_FILE)
I will not say that this is a recommended method; I just use it for simple checks. Autoconf may be a better option.
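As a rough sketch of the same idea outside make (the script name check_deps.sh and its messages are my own illustration, using the log4cpp example from the question), a small shell script can test-compile against the library and stop with a clear message before the real build starts:
#!/bin/sh
# check_deps.sh -- abort early if log4cpp is not installed on this machine.
cat > conftest.cpp <<'EOF'
#include <log4cpp/Category.hh>
int main() { return 0; }
EOF
if ! ${CXX:-g++} conftest.cpp -llog4cpp -o /dev/null > /dev/null 2>&1; then
    echo "error: log4cpp not found; please install it before running make" >&2
    rm -f conftest.cpp
    exit 1
fi
rm -f conftest.cpp
echo "log4cpp found"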
I'm creating an R package that uses a third-party (closed-source) API for importing .edf files into R (from SR Research Eyelink eye trackers). Someone who has already gotten this to work in Linux has shared his code, and I was able to get it to work on Mac. It was a matter of changing the src/Makevars file to point to the API as it's installed on the Mac:
PKG_LIBS=-framework edfapi -F/Library/Frameworks/
To make it work on Linux, Makevars needs to have:
PKG_LIBS=-L/usr/local/lib -ledfapi -lm
I know that for Windows-specific options I need to create a Makevars.win file, but how do I have the build options change for Mac versus Linux? I would like to do something like:
if [[ `uname` -eq Darwin ]] ; then
PKG_LIBS=-framework edfapi -F/Library/Frameworks/
fi
if [[ `uname` -eq Linux ]] ;then
PKG_LIBS=-L/usr/local/lib -ledfapi -lm
fi
but putting this into Makevars doesn't work. From researching this it seems that I need a combination of setting options in configure and Makevars, but I haven't quite figured it out. I am comfortable with R programming and know just enough C++ to make some basic functions, but I still don't understand all the nuances involved in the building process. If someone could explain the main purpose of configure/configure.ac versus Makevars/Makevars.in that would be helpful as well.
Ideally I would like to bundle the API along with the R package and have the different versions in a platform-specific folder. The API consists of just 3 header files and a binary (and it rarely changes). I realize this would prevent me from putting the package on CRAN but that is fine. I've managed to successfully build the package with the API files in a different folder, but at runtime it still looks for it in the standard spot (/Library/Frameworks). I realize this is a more loaded question and I can create a separate post as well.
This post helped me figure it out: stackoverflow.com/a/32590600/1457051
configure (in the package root directory) looks like this:
#!/bin/bash

# create the Makevars file if it doesn't exist yet
if [ ! -e "./src/Makevars" ]; then
    touch ./src/Makevars
fi

# if Mac
if [[ $(uname) == "Darwin" ]]; then
    echo "PKG_LIBS=-framework edfapi -F/Library/Frameworks/" > ./src/Makevars
# if Linux
elif [[ $(uname) == "Linux" ]]; then
    echo "PKG_LIBS=-L/usr/local/lib -ledfapi -lm" > ./src/Makevars
fi
Makevars is created and the appropriate options are added based on the platform. There may be a more direct solution, but this works for my purposes.
My Vim has the 'path' setting shown below.
path=.,/usr/include,,
I think this is the default value of 'path'.
Because of this, gf opens the C header file under the cursor.
But in a C++ file, C++ headers are not opened because the C++ header location is not added to Vim's 'path' option.
set path+=/usr/include/c++/4.6
I think that adding this setting in my vimrc would be a solution.
But the problem is that the actual directory for the C++ headers changes across Linux distributions and g++ compiler versions.
How can I set 'path' for C++ header files in a portable manner?
Now I am using this:
let g:gcpp_headers_path = system("g++ --version | grep g++ | awk '{print \"/usr/include/c++/\"$NF}'")
execute 'set path+=' . g:gcpp_headers_path
It works with a g++ environment; not tested with other compilers.
If there's a limited number of locations, a simple conditional in ~/.vimrc will do:
if isdirectory('/usr/include/c++/4.6')
set path+=/usr/include/c++/4.6
elseif isdirectory(...
If you have a lot of different systems, and don't want to maintain all variations in a central place, you can move the system-dependent settings to a separate, local-only file, and invoke that from your ~/.vimrc, like this:
" Source system-specific .vimrc first.
if filereadable(expand('~/local/.vimrc'))
source ~/local/.vimrc
endif
I recently had the same problem, so here is my solution for documentation purposes:
1) I added the following to my .bashrc:
# add additional search paths to vim.
VIM_EXTPATHS="$HOME/.vim.extpaths"
if [ ! -e "$VIM_EXTPATHS" ] || [ "/usr/bin/cpp" -nt "$VIM_EXTPATHS" ]; then
echo | cpp -v 2>&1 | \
awk '/^#include </ { state=1 } /End of search list/ { state=0 } /^ / && state { print "set path+=" substr($0, 2) "/**2" }' > $VIM_EXTPATHS
fi
2) I added the following to my .vimrc:
" add extra paths.
let s:extpaths=expand("$HOME/.vim.extpaths")
if filereadable(s:extpaths)
execute "source ".s:extpaths
endif
On my system, the contents of the .vim.extpaths file are as follows:
set path+=/usr/lib/gcc/x86_64-linux-gnu/8/include/**2
set path+=/usr/local/include/**2
set path+=/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/**2
set path+=/usr/include/x86_64-linux-gnu/**2
set path+=/usr/include/**2
The **2 means that ViM will search two directories deep inside these directories. Now gf will find all the C++ headers I need. If you increase the depth, searches will take a lot more time, so don't set this number too high.
Note: for #include <chrono>, Vim will go to /usr/include/boost/chrono, which, funny enough, is a directory. I wonder why goto-file opens a directory; maybe this should be reported as a bug. To get to the correct chrono header you have to type 2gf.
The following Vimscript code, intended for a .vimrc file, updates path to include the search paths used by the preprocessor.
if executable('gcc')
let s:expr = 'gcc -Wp,-v -x c++ - -fsyntax-only 2>&1 | grep "^ " | sed "s/^ //"'
let s:lines = systemlist(s:expr)
for s:line in s:lines
execute 'set path+=' . fnameescape(s:line)
endfor
endif
I have similar code in my .vimrc, but with additional special-case handling.
There are specific environment variables for the compiler to examine. If you are using gcc/g++ in a Linux/Unix environment, the variables are C_INCLUDE_PATH and CPLUS_INCLUDE_PATH. If you are using bash/sh, use export VARIABLE=value; if you are using csh/tcsh, use setenv VARIABLE value; if you are using some other shell, you will need to look that up. In these examples VARIABLE is either C_INCLUDE_PATH or CPLUS_INCLUDE_PATH. I hope this helps.
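For example, in bash this might look like the following (the 4.6 directory is just the example path from the question):
# bash/sh
export CPLUS_INCLUDE_PATH=/usr/include/c++/4.6
# csh/tcsh equivalent:
#   setenv CPLUS_INCLUDE_PATH /usr/include/c++/4.6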
My job mostly consists of engineering analysis, but I find myself distributing code more and more frequently among my colleagues. A big pain is that not every user is proficient in the intricacies of compiling source code, and I cannot distribute executables.
I've been working with C++ using Boost, and the problem is that I cannot ask every sysadmin of every network to install the libraries. Instead, I want to distribute a single source file (or as few as possible) so that the user can simply run g++ source.c -o program.
So, the question is: can you pack the Boost libraries with your code, and end up with a single file? I am talking about the Boost libraries which are "headers only" or "templates only".
As an inspiration, please look at the distribution of SQlite or the Lemon Parser Generator; the author amalgamates the stuff into a single source file which is trivial to compile.
Thank you.
Edit:
A related question on SO is about the Windows environment; I work in Linux.
There is a utility that comes with Boost called bcp that can scan your source and extract any Boost header files that are used from the Boost source tree. I've set up a script that does this extraction into our source tree, so that we can package the source that we need along with our code. It will also copy the Boost source files for a couple of Boost libraries we use that are not header-only, which are then compiled directly into our applications.
This is done once, and then anybody who uses the code doesn't even need to know that it depends on Boost. Here is what we use. It will also build bjam and bcp if they haven't been built already.
#!/bin/bash
BOOST_SRC=.../boost_1_43_0
DEST_DIR=../src/boost
TOOLSET=
if ( test `uname` = "Darwin") then
TOOLSET="--toolset=darwin"
fi
# make bcp if necessary
if ( ! test -x $BOOST_SRC/dist/bin/bcp ) then
if ( test -x $BOOST_SRC/tools/jam/*/bin.*/bjam ) then
BJAM=$BOOST_SRC/tools/jam/*/bin.*/bjam
else
echo "### Building bjam"
pushd $BOOST_SRC/tools/jam
./build_dist.sh
popd
if ( test -x $BOOST_SRC/tools/jam/*/bin.*/bjam ) then
BJAM=$BOOST_SRC/tools/jam/*/bin.*/bjam
fi
fi
echo "BJAM: $BJAM"
pushd $BOOST_SRC/tools/bcp
echo "### Building bcp"
echo "$BJAM $TOOLSET"
$BJAM $TOOLSET
if [ $? != "0" ]; then
exit 1;
fi
popd
fi
if ( ! test -x $BOOST_SRC/dist/bin/bcp) then
echo "### Couldn't find bpc"
exit 1;
fi
mkdir -p $DEST_DIR
echo "### Copying boost source"
MAKEFILEAM=$DEST_DIR/libs/Makefile.am
rm $MAKEFILEAM
# Signals
# copy source libraries
mkdir -p $DEST_DIR/libs/signals/src
cp $BOOST_SRC/libs/signals/src/* $DEST_DIR/libs/signals/src/.
echo -n "boost_sources += " >> $MAKEFILEAM
for f in `ls $DEST_DIR/libs/signals/src | fgrep .cpp`; do
echo -n "boost/libs/signals/src/$f " >> $MAKEFILEAM
done
echo >> $MAKEFILEAM
echo "### Extracting boost includes"
$BOOST_SRC/dist/bin/bcp --scan --boost=$BOOST_SRC ../src/*/*.[Ch] ../src/boost/libs/*/src/*.cpp ../src/smart_assert/smart_assert/priv/fwd/*.hpp $DEST_DIR
if [ $? != "0" ]; then
echo "### bcp failed"
rm -rf $DEST_DIR
exit 1;
fi
Have you considered just writing a build script for a build system like SCons?
You could write a Python script to download Boost, unpack it, compile the needed files (you can even run bjam if needed) and compile your own code.
The only dependencies your colleagues will need are Python and SCons.
Run the preprocessor on your code and save the output. If you started with one main.cpp with a bunch of includes in it, you will end up with one file where all of the includes have been sucked in. If you have multiple .cpp files, you will have to concatenate them and then run the preprocessor on the concatenated file; this should work as long as you don't have any duplicate global symbol names.
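A minimal sketch of that approach (the Boost include path is a placeholder; note that -E also pulls in system headers, which the script-based approach in the next answer avoids):
# expand every #include in main.cpp into one self-contained source file
g++ -E -I/path/to/boost main.cpp -o amalgamated.cpp
# for several translation units, concatenate first, then preprocess
cat a.cpp b.cpp c.cpp > combined.cpp
g++ -E -I/path/to/boost combined.cpp -o amalgamated.cpp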
For a more portable method, do what SQLite does and write your own script to combine and concatenate the files you created plus the Boost headers, without pulling in the system includes. See mksqlite3c.tcl in the SQLite code:
http://www2.sqlite.org/src/finfo?name=tool/mksqlite3c.tcl
Why not just check all the necessary files into SVN and send your co-workers the URL of the repository? Then they can check out the code whenever they want to, do an 'svn up' any time they want to update to the latest version, etc.
If you're on a Debian-derived variety of Linux, problems like this just shouldn't come up: let the packaging system and policy manual do the work. Just make it clear that the libboost-dev (or whatever) package is a build dependency of your code and needs to be installed beforehand, and then /usr/include/boost should be right there where your code expects to find it. If you're using a more recent version of Boost than the distro ships, it's probably worth figuring out how to package it yourself and work within the existing packaging/dependencies framework rather than reinventing another one.
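For instance, on Debian/Ubuntu that build dependency is satisfied with:
sudo apt-get install libboost-dev   # installs the Boost headers under /usr/include/boost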
I'm not familiar enough with .rpm based distros to comment on how things work there. But knowing I can easily setup exactly the build environment I need is, for me, one of the biggest advantages of Debian based development over Windows.