How to include JBoss modules in the compile path of a build

After applying patches to JBoss EAP 6.4 to bring it to 6.4.14, my build breaks.
I am pulling the various modules into the build path using **/*.jar (see below for the code).
Red Hat disables old versions of jars when applying patches (see the Red Hat documentation). This is by design:
When applying a patch to EAP 6.2.0 to build EAP 6.2.x, the patch tool does not replace the existing files. It places new files under the folder $JBOSS_HOME/modules/system/layers/base/.overlays/ and cripples the original files by flipping a bit in the end-of-central-directory record to prevent them from being used.
How can I include all the jars in the modules directory except for the disabled ones?
Here is the relevant portion of my build.xml file:
<property name="jbossmodules" value="${env.JBOSS_HOME}/modules" />
<path id="class.path">
    <pathelement path="${class.lib}" />
    <pathelement path="${java.class.path}" />
    [...]
    <fileset dir="${jbossmodules}">
        <include name="**/*.jar" />
    </fileset>
</path>

I worked with Red Hat support. If one has the appropriate user type, one can download a directory of jars to compile against. In short, Red Hat seems to discourage compiling against a live patched JBoss installation, but neither do they make it easy to obtain a set of jars to compile against.
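Given that answer, one possible workaround sketch (unverified; it assumes `unzip` is available and that the patch tool's bit-flip makes a crippled jar fail a zip integrity test) is to pre-filter the jars before handing them to Ant:

```shell
# valid_jars DIR: print every .jar under DIR that is still a readable zip
# archive; jars "crippled" by the patch tool fail `unzip -t` and are skipped.
valid_jars() {
  find "$1" -name '*.jar' | while read -r jar; do
    if unzip -qq -t "$jar" >/dev/null 2>&1; then
      printf '%s\n' "$jar"
    fi
  done
}

# Example (paths assumed):
# valid_jars "$JBOSS_HOME/modules" > valid-jars.txt
```

The resulting list could then be fed back into the build, for example via Ant's `<pathconvert>` over a generated file. Treat this as an illustration, not a Red Hat-supported workflow.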

Related

mlpack include file errors

I have recently started learning mlpack. Today I successfully built the solution from the mlpack source code, but when I create a new project I get the following error in a header file. I would like to know what is wrong and how to fix it.
errors
In the screenshot, algorithm.hpp is under the build folder; its absolute path is D:\MLPack\mlpack\build\include\mlpack\core\std_backport\algorithm.hpp. The source code in the new project is just a copy from https://www.mlpack.org/.
The screenshot below shows some of the files generated after building the mlpack.sln solution.
generated libs
The versions of the other libraries used to help build mlpack are:
Armadillo 10.8.0 (at least 9.800 required)
Boost (math_c99, spirit) 1.78.0 (at least 1.58.0; I added this version string in CMakeLists.txt before building mlpack)
CMake 3.20 (at least 3.6 required)
ensmallen 2.18.1 (at least 2.10.0 required)
cereal 1.3.0 (at least 1.1.2 required)
OpenBLAS 0.24.1
The configurations of my new project are shown below.
additional include directories
additional dependencies
post-build event
And I have also disabled "Conformance Mode".
disabled conformance mode
The entire build-and-use process followed https://www.mlpack.org/doc/stable/doxygen/build_windows.html and https://www.mlpack.org/doc/mlpack-3.4.2/doxygen/sample_ml_app.html.
I finally found that this problem is related to the version of the source code: I should not have used the latest source from https://github.com/mlpack/mlpack, but rather the source corresponding to the latest stable release. After I replaced the include directory with the one from the officially released Windows installation package, no errors were reported while building the solution in my new project, and I got the expected result.
the result
This incident taught me a lesson: when building with CMake in the future, I should use the stable release of the source code rather than the latest development version.
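That lesson can be sketched as a small shell helper (hypothetical function name; it assumes the repository uses release tags, as mlpack does):

```shell
# checkout_release DIR TAG: switch a git working copy to a release tag so the
# build uses stable sources rather than whatever is on the default branch.
# TAG is assumed to exist; list the available ones with `git -C DIR tag --list`.
checkout_release() {
  git -C "$1" checkout "tags/$2"
}

# Example (tag name assumed):
# git clone https://github.com/mlpack/mlpack
# checkout_release mlpack 3.4.2
```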

Compiler/Linker is not working with .manifest files

Some background:
I work with the C++ language in the Code::Blocks IDE on Windows 10. Usually, when I install Code::Blocks, I use the mingw-setup build so I can get right into programming. Because of this, I have no experience with setting up compilers. Recently a friend asked me to make a program for him, so I took him up on it and started working. When it came time to build, it didn't work, and I learned that the compiler was outdated, so I moved to a new one.
The Problem:
When I tried to make my friend's program require administrator permissions with the new compiler, I made my usual manifest file. The program built and all seemed good, but I realized the program was not actually requesting administrator permissions. I went back to my old compiler to see if I had messed something up, and I had not; the new compiler just didn't pick up the manifest file.
What I Previously Had Been Able To Do:
Previously, before I changed my compiler, I could put the name.exe.manifest file into the same folder as the name.exe file, build, and I was golden. Now, even when I put it into that folder, it doesn't work.
Questions:
Are there any MSYS2 packages I can use to fix this?
What exactly is the problem?
How can I prevent this in the future?
Besides MSYS2 packages, how else can I fix this?
Some files from my custom build are missing in comparison to the Code::Blocks one. If I copy them over to my build, will it work?
What I Tried:
I have tried putting the manifest file in several different folders.
I have tried changing the linker settings to include "-o Dir/SubDir/Manifest".
I have tried having the linker search different directories.
I have tried researching the problem and asking around in different places. (My conclusion was that it is a problem with the compiler/linker, or that I am missing a file I need.)
My Manifest File:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity version="1.0.0.0"
                    processorArchitecture="X86"
                    name="NAME HERE"
                    type="win32"/>
  <description>DESCRIPTION HERE</description>
  <!-- Identify the application security requirements. -->
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel
          level="requireAdministrator"
          uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
Mingw Compiler (Code::Blocks):
Image: (In MingW: The Normal Code::Blocks Build)
Image: (In MingW: The Normal Code::Blocks Build) > Bin Folder
MSYS2 Compiler (Custom Compiler):
Image: MSYS2 Mingw64: Custom Build
Image: MSYS2 Mingw64: Custom Build > Bin folder*
*If you look, the normal Code::Blocks MinGW compiler's bin directory pales in comparison to my build's.
I will be happy to answer any questions you have!
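One approach often used with MinGW-family toolchains, which frequently ignore a loose name.exe.manifest sitting next to the executable, is to embed the manifest as a resource instead. This is a hedged sketch, not a verified fix for this setup; app.rc, name.exe.manifest, and main.cpp are assumed file names:

```shell
# 1) Create a resource script, app.rc, containing a single line
#    (resource id 1, resource type 24 = RT_MANIFEST):
#
#        1 24 "name.exe.manifest"
#
# 2) Compile the resource and link it into the executable:
windres app.rc -O coff -o app.res
g++ main.cpp app.res -o name.exe
```

After rebuilding, Windows reads the manifest from inside the executable, so the sidecar file is no longer needed. `windres` is part of binutils and typically ships with MinGW-w64; under MSYS2 it comes with the mingw-w64 toolchain packages.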

Installing Boost v1.70 in Visual Studio 2019 using Nuget

I'm learning C++, some of the Boost libraries, and VS2019 Community Edition. I'm currently reading through the Boost website's online material and the book Learning Boost C++ Libraries, trying to follow along. I would like to update to 1.70.0 and figure out exactly why my code is building correctly. I know, I know... if it's working, why question it? Well, the truth is I just don't understand why!
I wasn't aware of NuGet and vcpkg prior to downloading and installing Boost 1.68.0 manually (by the way, there seem to be way too many ways of installing the libraries, and it's quite confusing). I have since deleted the original Boost installation directory and tried to install the Boost libraries through NuGet in VS2019. This didn't appear to be successful (although I suspect vcpkg (see below) has something to do with it). I was getting a single linker error (it couldn't find the .lib file), which I eventually resolved (don't ask me how; it's a confusing story involving creating a new project and cutting/pasting my code. Now it works; go figure).
Currently, when I begin typing an #include directive in my code, I can see the path to the files, which is buried under D:\...\vcpkg\installed\x86-windows\include\boost. I've never used vcpkg directly, so I have no idea why it's there. The Property Pages for the project don't list the paths under C/C++ > Additional Include Directories or under Linker > Additional Library Directories, so I haven't a clue where the compiler and linker are getting the references from. There appear to be no packages installed in the NuGet UI.
Ideally, I would like to start over with the Boost installation and use VS internal tools to do so. I will probably have several different VS solutions as I explore Boost and would prefer Boost to be available to all future projects. Is that possible?
Any advice?
One thing to keep in mind is that the "boost" package only installs the header-only libraries; it doesn't install the libraries that require a binary component.
To install the binary libraries, you need to install individual packages; for instance, "boost_log-vc141" is the Boost logging library.
First, install the boost package into your project using NuGet. You should see a packages.config added to your project that looks like this:
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="boost" version="1.70.0.0" targetFramework="native" />
</packages>
Next, include the desired boost header file:
#include <boost\array.hpp>
You can confirm that the header is being loaded from the correct path by placing the caret after the hpp, pressing CTRL+SPACE, then hovering over the item in the completion list:

Specify location of static libraries for C++ application in Linux

First of all, I hope that I am asking the question in the right context here...
I build an application in C++ with Code::Blocks. The application uses static libraries that are provided by a third party and cannot be installed on a system via the package management. Therefore I ship these libraries when I distribute my application.
Here is what my target configuration looks like:
<Target title="Unix_162">
  <Option output="bin/my_app" prefix_auto="1" extension_auto="1" />
  <Option working_dir="/home/marco/third_party_dist/lib" />
  <Option object_output="obj/Unix_162" />
  <Option type="1" />
  <Option compiler="gcc" />
  <Option use_console_runner="0" />
  <Option parameters="-c" />
  <Compiler>
    <Add directory="/home/marco/third_party_dist/include" />
  </Compiler>
  <Linker>
    <Add library="/home/marco/third_party_dist/lib/lib1.so" />
    <Add library="/home/marco/third_party_dist/lib/lib2.so" />
    <!-- some more included the same way -->
    <Add directory="/home/marco/third_party_dist/lib" />
  </Linker>
</Target>
I can build this target fine and run it. Everything works.
Today, I tried to run it on Debian Squeeze: I just copied over a folder containing both the executable and the third-party libraries. I thought that as long as everything was in one folder, the executable would find the .so files. I was wrong. I get the message:
/home/my_app/my_app: error while loading shared libraries: lib1.so: cannot open shared object file: No such file or directory
I don't get this message on my development machine because Code::Blocks sets a working directory for the executable. I could make the error go away by putting the location of the .so files inside /etc/ld.so.conf.d/my_app.conf...
Is there any way I can build the executable so that it searches for the libs in the execution directory? Or is this a problem specific to Debian? Or can I specify the working directory for the process before I execute the executable?
I want to avoid changing the system's configuration or environment before the application can be started...
First point: these are not static libraries; they are shared libraries.
So the problem is locating the libraries at runtime.
There are a couple of ways of doing this:
1) Set the LD_LIBRARY_PATH environment variable.
This is like PATH, but for shared libraries.
2) Set the rpath in the executable.
This is a search path baked into the executable, where it looks for shared libs:
-Wl,-rpath,<LIB_INSTALL_PATH>
This can be set to . which makes it look in the current working directory,
or you can set it to '$ORIGIN', which makes it look in the directory the application is installed in.
3) You can install them into one of the default locations for shared libraries.
Look inside /etc/ld.so.conf; the defaults are usually /usr/lib and /usr/local/lib.
4) You can add more default locations.
Modify /etc/ld.so.conf.
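For option 1 above, a small wrapper script shipped next to the binary is a common way to do this without touching system configuration. This is a sketch; the layout (my_app sitting next to its .so files) and the script name are assumptions:

```shell
# run_my_app.sh: resolve the directory this script lives in, then launch the
# real binary with that directory prepended to the dynamic linker search path.

app_dir() {
  # print the absolute directory containing the given path
  CDPATH= cd -- "$(dirname -- "$1")" && pwd
}

# In the real wrapper, these two lines would do the launch:
# here=$(app_dir "$0")
# LD_LIBRARY_PATH="$here:${LD_LIBRARY_PATH}" exec "$here/my_app" "$@"
```

The wrapper keeps the change scoped to a single invocation, whereas /etc/ld.so.conf edits affect the whole system.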
Yes, there is: you have to pass the option -rpath <path> to your linker, where <path> is the path of your library (similar to the -L option).
Also, you are probably talking about shared libraries, not static ones.
I thought that as long as everything is in one folder the executable will find the .so files. I was wrong.
An extra step is required to make the Linux dynamic linker look for shared libraries in the same directory as the executable. Link the executable with the -Wl,-rpath,'$ORIGIN' option (in a makefile, the $ needs to be quoted, as in -Wl,-rpath,'$$ORIGIN'). See the $ORIGIN and rpath note for more details.

Determining Clojure Jar Path

The point of this question is to clear up confusion about Clojure project.clj dependencies and how to specify a local dependency.
I have a bunch of Clojure lein projects in a tree
./projects/clojure/bene-csv # A csv parsing library
./projects/clojure/bene-cmp # A main program that depends on bene-csv
I'm editing bene-cmp's project.clj file. I want to add a dependency on
./projects/clojure/bene-csv/bene-csv-1.0.0-SN.jar .
Do I use simple directory notation to specify the path, or something else?
Thank you.
Edit:
I can include bene-csv in my project by entering lein install in the bene-csv project directory, and using these project.clj entries in bene-cmp's project directory's project.clj file:
(defproject bene-cmp "1.0.0-SN"
  :description "This is the main benetrak/GIC comparison program."
  :dependencies [[org.clojure/clojure "1.3.0"]
                 [clojure-csv/clojure-csv "1.3.2"]
                 [bene-csv "1.0.0-SN"]])
However, I am still trying to figure out what the path is, and would appreciate any pointers or help along those lines. Thank You.
Leiningen uses Maven dependency management under the covers, so all dependencies get installed in
${HOME}/.m2/repository/${groupId-as-path}/${artifactId}/${version}/${artifactId}-${version}.jar
where for [org.clojure/clojure "1.3.0"] the groupId is org.clojure, the artifactId is clojure, and the version is 1.3.0. groupIds are converted to paths, so a groupId of org.clojure has a path of org/clojure.
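As a sketch of that convention, this hypothetical helper prints where a given groupId/artifactId/version coordinate lands in the local repository:

```shell
# m2_path GROUP ARTIFACT VERSION: print where Maven/Leiningen stores the jar
# in the local repository (~/.m2). Dots in the groupId become path separators.
m2_path() {
  group_path=$(printf '%s' "$1" | tr '.' '/')
  printf '%s/.m2/repository/%s/%s/%s/%s-%s.jar\n' \
    "$HOME" "$group_path" "$2" "$3" "$2" "$3"
}

m2_path org.clojure clojure 1.3.0
# e.g. /home/you/.m2/repository/org/clojure/clojure/1.3.0/clojure-1.3.0.jar
```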
In a maven dependency, specified in pom.xml, this would look like:
<project>
  ...
  <dependencies>
    <dependency>
      <groupId>org.clojure</groupId>
      <artifactId>clojure</artifactId>
      <version>1.3.0</version>
    </dependency>
  </dependencies>
  ...
</project>
Note: if no groupId is specified, then Leiningen uses the same value for both the groupId and the artifactId.
The benefit of using Maven dependency management is that it handles transitive dependencies for you, i.e., if you specify a dependency on something, you get all the things it depends on, all the things those things depend on, and so on.
So to depend on a local project, the correct thing is to install the local project in your local repository.
To save you from endlessly changing your version numbers during development, Maven supports SNAPSHOT dependencies, whereby some extra information (basically a timestamp) is appended to the version, and Maven knows that for, say, 1.3.1-SNAPSHOT it should look for the latest build of that snapshot. This is triggered by the {version}-SNAPSHOT naming convention.
You can, in Maven, specify system dependencies with a hard-coded path, but generally that's bad practice; it's usually reserved for things that are platform dependent, i.e. may have a native library component.
By default the Maven Central repository is searched, and Leiningen adds the Clojars repository, which serves as a central repo for Clojure jars.
Leiningen uses this machinery under the covers and builds a classpath referring to the jars in your local Maven repository.
Note that you can generate a pom.xml from a Leiningen project with lein pom. You could then drive Maven from that. A useful feature is
mvn dependency:tree
which prints an ASCII-art representation of all the dependencies.