I have a Gradle build, and when I use the ivy-publish plugin it generates jars and an ivy.xml descriptor containing my project's configurations and dependencies. What is the purpose of this file? Is it needed so that applications which consume my build know which jars to provide at runtime, or something else?
The ivy.xml file is the metadata file for your module.
An ivy file is divided into the following sections:
<publications>
An ivy module can publish multiple files. The purpose of this section is to
list them
</publications>
<configurations>
This section describes the various logical file groups that can exist in your
module
</configurations>
<dependencies>
This section describes the dependencies that might exist on other modules.
One can create mappings between different configurations here as well.
</dependencies>
At a minimum you need the publications section. It lists the files available, so if I make a request for "moduleA", the ivy client knows which files to download.
Configurations are optional and a difficult concept to grasp initially, but they let you control subsets of the published files for download: for example, when you want only the binary files or the source packages.
Finally, dependencies. This section enables an ivy client to download not only ModuleA's files, but also files belonging to other modules that ModuleA depends on. Generally these are third-party libraries that might be required to compile the code, or that need to be on your classpath at runtime. Ivy can resolve these for you automatically. (Again, the contents of these classpaths can be customized using configurations: jars needed to "compile", at "runtime", or when "test"ing the code.)
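A minimal sketch of such a descriptor, to make the three sections concrete (the module, configuration, and dependency names here are illustrative, not taken from your build):

```xml
<ivy-module version="2.0">
  <info organisation="com.example" module="moduleA" revision="1.0"/>
  <configurations>
    <conf name="compile" description="what is needed to compile against moduleA"/>
    <conf name="runtime" description="what is needed on the classpath at runtime" extends="compile"/>
  </configurations>
  <publications>
    <!-- the files this module publishes -->
    <artifact name="moduleA" type="jar" ext="jar" conf="compile,runtime"/>
  </publications>
  <dependencies>
    <!-- map our "compile" conf onto the dependency's "default" conf -->
    <dependency org="org.slf4j" name="slf4j-api" rev="1.7.36" conf="compile->default"/>
  </dependencies>
</ivy-module>
```

A client asking for moduleA's "runtime" configuration would then get moduleA's jar plus the slf4j-api jar it depends on.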
Related
I want to ease adoption of dependency management in my organization, and it does not seem to be as easy of a task with C/C++ as it is with Java.
I want to use an internal Artifactory repository (Maven, Ivy, Gradle, or whatever is suitable) to be able to download and publish precompiled external libraries and then statically link them (the catch: we are using a custom compiler for an embedded platform).
I have read what it seems to be the basic guide from Gradle's website, however there is no mention of external repositories:
https://docs.gradle.org/current/userguide/native_software.html
http://gradle.monochromeroad.com/docs/userguide/nativeBinaries.html
These two threads seem to touch on the subject, but I'm confused as to where the linking is happening and in what order:
https://discuss.gradle.org/t/external-dependencies-in-cpp-projects/7135
https://discuss.gradle.org/t/right-way-to-copy-contents-from-dependency-archives/7449
So far I cannot wrap my mind around some of the closures/syntax and correct usage of stuff like "configurations", "dependencies", "repositories" because they seem to be used in different ways.
With all that said, what would be a minimal example to get the following done?:
Go to Artifactory
Fetch a dependency (let's assume it is a .a or .o file)
Put that individual dependency in a specific location within the project
Build (specifying the linking order and compiler)
I want to make it easy for others to work on my repository. However, since some of the compiled dependencies are over 100 MB in size, I cannot include them in the repository; GitHub rejects files that large.
What is the best way to handle large binaries of dependencies? Building the libraries from source is not easy under Windows and takes hours. I don't want every developer to struggle with this process.
I've recently been working on using Ivy (http://ant.apache.org/ivy/) with C++ binaries. The basic idea is that you build the binaries for every build combination. You will then zip each build combination into a file with a name like mypackage-windows-vs12-x86-debug.zip. In your ivy.xml, you will associate each zip file with exactly one configuration (ex: windows-vs12-x86-debug). Then you publish this package of multiple zip files to an Ivy repo. You can either host the repo yourself or you can try to upload to an existing Ivy repo. You would create a package of zip files for each dependency, and the ivy.xml files will describe the dependency chain among all the packages.
Then, your developers must set up Ivy. In their ivy.xml files, they will list your package as a dependency, along with the configuration they need (ex: windows-vs12-x86-debug). They will also need to add an ivy resolve/retrieve step to their build. Ivy will download the zip files for your package and everything that your package depends on. Then they will need to set up unzip & move tasks in their builds to extract the binaries you are providing, and put them in places their build is expecting.
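As a sketch of what the consumer side looks like (the organisation, package name, and configuration names below are hypothetical, matching the example above), a developer's ivy.xml might declare:

```xml
<ivy-module version="2.0">
  <info organisation="com.mycorp" module="myapp" revision="0.1"/>
  <configurations>
    <conf name="windows-vs12-x86-debug"/>
    <conf name="windows-vs12-x86-release"/>
  </configurations>
  <dependencies>
    <!-- pull only the zip matching our build combination -->
    <dependency org="com.mycorp" name="mypackage" rev="1.0"
                conf="windows-vs12-x86-debug->windows-vs12-x86-debug"/>
  </dependencies>
</ivy-module>
```

An ivy retrieve for the windows-vs12-x86-debug configuration then downloads just mypackage-windows-vs12-x86-debug.zip (and the matching zips of anything mypackage depends on), ready for the unzip-and-move step.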
Ivy's a cool tool but it is definitely streamlined for Java and not for C++. When it's all set up, it's pretty great. However, in my experience as a person who is not really familiar with DevOps at all, integrating it into a C++ build has been challenging. I found that it was easiest to create simple ant tasks that do the required ivy actions, then use my "regular" build system (make) to call those ant tasks when needed.
So I should also mention that the reason I looked into using Ivy was that I was implementing this in a corporate environment where I couldn't change system files. If you and your developers can do that, you may be better off with a RPM/APT system. You'd set up a repo and get your developers to add your repo to the appropriate RPM/APT config file. Then they would run commands like sudo apt-get install mypackage and apt-get would do all the work of downloading and installing the right files in the right places. I don't know how this would work on Windows, maybe someone has created a windows RPM/APT client.
I am new to SVN and I want to commit code to SVN using TortoiseSVN. I have the C++ headers and sources of the code, but I don't know how to organize the folders efficiently before uploading the version to SVN. Any suggestions on how people usually do this? Is there any difference between the structure for different languages, for example C++ or Java? Should I follow any specific rules?
Update
So after checking the answers I have made things a bit clearer. A usual folder structure for one project is the following:
/trunk
/branches
/tags
But I also found a similar structure that I liked a lot, which is:
/trunk        # keep it in development mode always
/samples      # samples of use
/modules      # software modules
/project_modName
/include      # .hpp files
/src          # .cpp files
/test         # unit tests
/branches     # experimental developments (copies of trunk at various stages)
/tags         # stable versions
/extras
/3rdparty     # libs
/data         # necessary data for development
/doc          # documentation
/resources    # for window applications
I particularly like it for multimedia application code.
UPDATE 2
This update is just to explain how I created my repository. I created a folder called structure_svn and, inside it, the structure shown above. Then I right-clicked on the parent folder and selected Import. In the URL field I wrote the repository path (file:///c:/svn_repos), so the structure is created directly under svn_repos, without the structure_svn folder itself.
I want to stress this because the folder you right-click on to import will never appear in the repository. I only realized this when I tried it, and it is also explained in the tutorials.
The next step is to successfully divide my code into the created structure.
Here's how I structure my tree in a programming project (mainly from a C/C++ perspective):
/
src — Source and header files written by myself
ext — External dependencies; contains third-party libraries
libname-1.2.8
include — Headers
lib — Compiled lib files
Download.txt — Contains a link to download the version used
ide — I store project files in here
vc10 — I arrange project files by IDE
bin — Compiled binaries go here
obj — The compiler's build files
gcc — If your project size justifies it, make a separate folder for each compiler's files
doc — Documentation of any kind
README
INSTALL
COPYING
makefile — Something to automate generation of IDE project files. I prefer CMake.
A few notes:
If I'm writing a library (and I'm using C/C++) I'm going to organize my source files first in two folders called "src/include" and "src/source" and then by module. If it's an application, then I'm going to organize them just by module (headers and sources will go in the same folder).
Files and directories that I listed above in italics I won't add to the code repository.
Edit: Note that I'm using Mercurial, not SVN, so the structure above is tailored to that version control system. Anyway, I see from your update that you already found one that you like.
One huge step forward is making sure all your projects do out-of-source builds, i.e. put temporary files in $TEMP and all output files in a dedicated bin/lib directory. If done properly, this leaves you with source directories containing only source (what's in a name). Apart from 'pure' source files, also make sure that everything needed to build the source is in the repository: project files/generators, resources.
Once you got that in place correctly, there's a good chance you only have to put some typical project generated files (like *.suo for Visual Studio) into SVN's ignore list, and you're ready for commit.
Basically you can put in SVN just whatever you want. The only standard you might consider following here is the standard trunk/branches/tags repository layout.
Within the project, you are right that there exist several best practices, and they differ per language. E.g. a Java package is organized by namespace. In the C++ world I have seen two main ways to organize it:
Every class split into a header (.h) and a source file (.cpp) inside the same directory
Headers and sources separated (so you have a folder specifically for headers). This is useful for libraries, so that this path can be used by upper-layer projects.
Then you need a folder for third-party libs, another for the target files, and others for things such as build files or documentation.
There is a good explanation in the next link if you are new to SVN!
Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
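To make the GLOB idea concrete, a sketch of the top-level CMakeLists.txt (directory names are hypothetical; note that since CMake 3.12 the CONFIGURE_DEPENDS flag on file(GLOB) mitigates the first issue by re-globbing at build time):

```cmake
# Top-level CMakeLists.txt: libraries are always built
add_subdirectory(basic)
add_subdirectory(io)
add_subdirectory(web)

# Build whatever apps the user has cloned into apps/
file(GLOB app_dirs RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} apps/*)
foreach(app ${app_dirs})
  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/${app}/CMakeLists.txt)
    add_subdirectory(${app})
  endif()
endforeach()
```

With everything in one configure step, a single make sees dependencies across the whole tree, so touching a low-level library recompiles the apps that use it.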
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io and web and one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets and the application projects could then import the CMake targets.
When you build the library project first the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib) otherwise one can always set up the proper CMake variable to indicate the correct directory.
When doing it this way a make in the application project won't do a make in the library project, you would have to take care of this yourself.
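A minimal sketch of that export/import idea (target names, file names, and the CORELIBS_BUILD_DIR variable are assumptions for illustration):

```cmake
# In the library project: export the targets after defining them
add_library(basic SHARED basic.cpp)
add_library(io SHARED io.cpp)
target_link_libraries(io basic)
export(TARGETS basic io FILE ${CMAKE_BINARY_DIR}/CoreLibs.cmake)

# In an application project: import them from the library build tree,
# without the libraries having to be installed anywhere fixed.
# (CORELIBS_BUILD_DIR would be pointed at the library build directory.)
include(${CORELIBS_BUILD_DIR}/CoreLibs.cmake)
add_executable(app-a main.cpp)
target_link_libraries(app-a io)
```

The exported file records the absolute paths of the compiled libraries, which is why the application finds them without an install step.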
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, Poppler, etc.). Take a look at their CMakeLists.txt to find out how they've done this.
Basically allowing you to just toggle features as required.
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this makes the libraries not be shared anymore, but maybe that doesn't need to be a big deal in 2011. RAM and disk have gotten cheaper, but engineering time has not, and sharing of source is arguably more portable than sharing of binaries.
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
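A sketch of an app-level CMakeLists.txt under that layout (the relative paths assume the libraries sit next to the app directory, as in the question):

```cmake
# app-a/CMakeLists.txt: pull the sibling library directories into this build
cmake_minimum_required(VERSION 3.5)
project(app-a)

# Source dirs outside the current tree need an explicit binary dir
add_subdirectory(../basic ${CMAKE_BINARY_DIR}/basic)
add_subdirectory(../io ${CMAKE_BINARY_DIR}/io)
add_subdirectory(../web ${CMAKE_BINARY_DIR}/web)

add_executable(app-a main.cpp)
target_link_libraries(app-a web)  # web pulls in io and basic transitively
```

Since each app configures its own copy of the library targets, a make from the app directory rebuilds changed libraries automatically, at the cost of each app compiling the libraries separately.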
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question, which is to check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories (which are put in there by the user). Then I can just say cmake myrootdir and it does find all the various components. This works more or less OK. It does have some drawbacks, though, such as the fact that some third-party tools like BuildBot expect a more traditional project structure, which makes integrating other tools with this sort of arrangement a little more work.
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
Is there some way to externalize the paths of libraries that are used in the compilation process on Visual Studio 2008? Like, *.properties files?
My goal is to define "variables" referencing locations to headers files and libraries, like *.properties files are used in the Ant build system for Java.
I think you're looking for .vsprops files. They're comparable to the *.properties files.
Environment variables?
All the $(xyz) replacements allowed in the properties are environment variables, and you are allowed to "bring your own".
They are normally inherited from the parent process, so you can set them
for the machine / user in the system settings (usually inherited via explorer)
in a batch file that sets them before running devenv.exe
in an addin like SolutionBuildEnvironment to read them from a project file
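For instance, a minimal sketch of the batch-file approach (the variable names and paths below are made up for illustration):

```bat
rem setenv.bat: define library locations, then launch Visual Studio with them
set BOOST_ROOT=C:\libs\boost_1_47_0
set MYLIB_DIR=C:\libs\mylib
start "" "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" MySolution.sln
```

Inside the project settings you could then write $(BOOST_ROOT) in, e.g., Additional Include Directories, and every developer can point the variable at their own install location.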
I don't know how Ant works, but for your static libraries and headers you can edit the .vcproj file (these are in fact XML files). Libraries go in the VCLinkerTool tool tag, in AdditionalDependencies:
<Tool
Name="VCLinkerTool"
AdditionalOptions=" /subsystem:windowsce,5.01"
AdditionalDependencies="iphlpapi.lib commctrl.lib coredll.lib"
/>
Additional header paths are defined in the VCCLCompilerTool tool tag, in AdditionalIncludeDirectories
<Tool
Name="VCCLCompilerTool"
Optimization="0"
AdditionalIncludeDirectories="dev\mydir"
PreprocessorDefinitions="WIN32;_DEBUG;_CONSOLE"
/>
Be careful, there is one such section for each build configuration.
Is this what you are looking for?
Edit: the .vsprops files suggested by MSalters are more powerful; you can define additional dependencies and libraries in them and make your projects inherit these properties. Well, I learned something useful today!
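A minimal sketch of such a VS2008 property sheet (the macro name and paths are hypothetical; a project inherits it via Tools > Property Manager or the InheritedPropertySheets attribute):

```xml
<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioPropertySheet
    ProjectType="Visual C++"
    Version="8.00"
    Name="LibraryPaths">
  <UserMacro Name="BOOST_ROOT" Value="C:\libs\boost_1_47_0"/>
  <Tool
      Name="VCCLCompilerTool"
      AdditionalIncludeDirectories="$(BOOST_ROOT)"/>
  <Tool
      Name="VCLinkerTool"
      AdditionalLibraryDirectories="$(BOOST_ROOT)\lib"/>
</VisualStudioPropertySheet>
```

Every project that inherits this sheet picks up the include and library paths, and changing the location means editing one file instead of every configuration of every project.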
If you're referring to influencing the location of #includes, Project properties|Configuration Properties|C/C++/Additional Include Directories is the ticket. There is also project properties|Common Properties|Additional reference search paths.
If your question is how to parameterize stuff in a .vcproj file like you would in Ant, the answer is that in VS2010 VC projects are (or can be) MSBuild-based, whereas VS2008 .vcproj files are a proprietary XML-based format (but, as the other answers say, they have an analogous properties capability).
In the absence of more info, I'm pretty sure the standard approach for what you're doing is to add your search paths a la the first or second paragraph.
You can use a build system like CMake. You give CMake a high-level description of your project, and it spits out the necessary files to get your project to build correctly via another tool (e.g. Visual Studio's IDE, or a Unix-style makefile).
Paths: You can use CMake's INCLUDE_DIRECTORIES() and LINK_DIRECTORIES() commands in the CMakeList.txt configuration file to specify these paths. CMake has variables which describe both aspects of your environment (many of which can be autodiscovered, e.g. CMAKE_C_COMPILER which is the command to run your C compiler) plus any options you wish to allow the user to specify directly. All variables are stored in a separate plain text configuration file, CMakeCache.txt, that can be edited in a text editor or using a special GUI configuration tool.
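For example, a minimal CMakeLists.txt along those lines (the library name and default path are assumptions; the cache variable is what ends up editable in CMakeCache.txt or the GUI):

```cmake
cmake_minimum_required(VERSION 3.5)
project(myapp)

# Externalized path: override with -DMYLIB_ROOT=... or edit CMakeCache.txt
set(MYLIB_ROOT "C:/libs/mylib" CACHE PATH "Where mylib is installed")

include_directories(${MYLIB_ROOT}/include)
link_directories(${MYLIB_ROOT}/lib)

add_executable(myapp main.cpp)
target_link_libraries(myapp mylib)
```

Each developer (or build machine) sets MYLIB_ROOT once, and the generated Visual Studio solution or makefile picks up the right paths.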
CMake has many other features, like the ability to autodiscover the locations of many useful libraries, and to produce customised source/header files from "template" files containing CMake directives using the CONFIGURE_FILE() command.
Advantages:
Highly portable across common environments (e.g. it can produce solution files for several versions of MS Visual C++, as well as makefiles for Unix (e.g. Linux) systems).
Used by several large multiplatform projects (e.g. KDE)
Very simple to set up simple projects
I've found the dependency checking system to be rock-solid -- e.g. it knows to rebuild if you change compiler options (unlike naive use of make for example)
Disadvantages:
Ugly, primitive syntax
Documentation quality varies (e.g. it's sometimes hard to tell exactly what properties affect any given object)
Some time investment involved