I'm working on webservices for WebSphere and I wish to no longer depend on the Rational Software Delivery Platform (aka RAD) IDE.
I'm asking if someone knows if it is possible to generate the following files:
ibm-webservices-ext.xmi
ibm-webservices-bnd.xmi
webservices.xml
without having to use RAD (e.g. some Ant script or WebSphere batch).
This is a really annoying lock-in.
I'm trying to port these webservices projects to a more controllable development process, using Maven, automated builds, and so on, but I have found it quite difficult.
Has someone solved similar issues?
If anyone is still looking for help with this, we took a slightly different approach by creating the RAD and WAS 8.5 specific files at project creation time.
For my current project, we have a fairly standard project structure and naming convention so we use a Maven archetype to create our projects and include those IBM specific files, ibm-webservices-bnd.xmi in particular, in the Maven archetype.
The easiest way to do this is to take an existing project that has the necessary files and run the create-from-project goal from your project folder:
mvn clean archetype:create-from-project -Dinteractive=true
Use interactive mode to give the archetype a sensible archetype.artifactId (but do not change the GAV of the project):
Define value for archetype.groupId: com.name.archgroup: : com.name.common.archetype
Define value for archetype.artifactId: MyService-archetype: : service-archetype-0.8
Define value for archetype.version: 1.0-SNAPSHOT: :
Define value for groupId: com.name.archgroup: :
Define value for artifactId: MyService: :
Define value for version: 1.0-SNAPSHOT: :
Define value for package: com.name: : com.name.common.archetype
This gets you most of the way, but the IBM files are not processed by default. The trick is then to modify the generated files in /MyService/target/generated-sources/archetype/target/classes/archetype-resources so that the IBM files are templated as well. Replace instances of the old project name and package name with ${rootArtifactId} and ${groupId}, keeping track of which files had the hard-coded values.
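For example, an attribute that hard-codes the project name (a hypothetical excerpt; the actual contents of your .xmi files will differ):
name="MyService"
would be templated as:
name="${rootArtifactId}"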
Then modify /MyService/target/generated-sources/archetype/target/classes/META-INF/maven/archetype-metadata.xml so that the files you changed by hand are included in the filtering. For instance, under my EJB module section, *.xmi was included but not filtered. Move the include into the filtered fileset:
<fileSet filtered="true" encoding="UTF-8">
  <directory>src/main/resources</directory>
  <includes>
    <include>**/*.xml</include>
    <include>**/*.properties</include>
    <include>**/*.xmi</include>
  </includes>
</fileSet>
You'll need to do this for every file you modified to include a ${rootArtifactId} or ${groupId}, so that Velocity processes it in the next step:
cd target\generated-sources\archetype
mvn install
This packages up your changes and places the jar into your local repository so that you can test it out before publishing to your Maven repository server.
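To try it out from your local repository, you can generate a test project against the local archetype catalog; the archetype plugin's -DarchetypeCatalog=local property restricts the lookup to locally installed archetypes:
mvn archetype:generate -DarchetypeCatalog=local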
Once you are satisfied, add your Maven repositories to target/generated-sources/archetype/pom.xml and run
mvn deploy
Then instruct developers to begin using the archetype to create your mavenized projects, for example as shown below.
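Using the coordinates from the interactive session above, generating a new project might look like this (MyNewService is a placeholder; the archetype GAV is the example one defined earlier):
mvn archetype:generate -DarchetypeGroupId=com.name.common.archetype -DarchetypeArtifactId=service-archetype-0.8 -DarchetypeVersion=1.0-SNAPSHOT -DgroupId=com.name -DartifactId=MyNewService -DinteractiveMode=false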
Note: our ibm-webservices-bnd.xmi files appear to include something like xmi:id="RouterModule_112345678901234"
We remove this value before running mvn install, as it appears to be project specific.
I work with a massive codebase distributed across many repositories and using even more third-party dependencies. The goal is to make the build hermetic, and I'm contemplating using Bazel to achieve it. On the one hand, Bazel has the git_repository rule to refer to external repos in the WORKSPACE file. On the other hand, WORKSPACE files are not loaded recursively, so to get at indirect dependencies I need to build an all-inclusive WORKSPACE file somehow. I wonder if somebody has already tackled that problem using Bazel or some other existing tools. Is there a way to expand the WORKSPACE as part of the build? Maybe WORKSPACE can #include other (generated) files?
WORKSPACE files can load and then call macros, which gives similar functionality to #include.
A common pattern is for each project to have a macro which calls other projects' macros (for dependencies on other projects) and creates *_archive rules (for dependencies directly on files to download), so that everything needed to build is instantiated. For example, protobuf has protobuf_deps to implement this pattern. If you create a repository for protobuf (using git_repository, or http_archive, or any of the other repository rules), then you can load that macro and call it, and you'll automatically get all the transitive dependencies.
For example (from Chromium):
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# This com_google_protobuf repository is required for proto_library rule.
# It provides the protocol compiler binary (i.e., protoc).
http_archive(
name = "com_google_protobuf",
strip_prefix = "protobuf-master",
urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
I'm showing http_archive because it's easier to work with, but you can easily change it to git_repository if you want.
Another common pattern which makes this all work is the way protobuf_deps checks native.existing_rule before creating each http_archive. That allows you to instantiate a specific version (or from a specific source, etc) of the dependency directly in your WORKSPACE file to override the one protobuf would otherwise bring in.
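As a sketch of that pattern, a project's deps macro might look like the following (my_project_deps is a hypothetical name; the protobuf coordinates are the ones from the example above):

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_project_deps():
    # Only create the repository if the WORKSPACE (or another macro)
    # has not already defined one with this name, so callers can override it.
    if not native.existing_rule("com_google_protobuf"):
        http_archive(
            name = "com_google_protobuf",
            strip_prefix = "protobuf-master",
            urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
        )

This lives in a .bzl file; your WORKSPACE loads and calls my_project_deps(), and anyone who wants to pin a different protobuf version simply declares their own com_google_protobuf repository before the call.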
I am trying to write a Dropwizard application, and its documentation tells me that I need to ship everything as an uber jar.
However, my application needs to support multiple databases, which requires multiple JDBC driver jars on my classpath, not all of which should be shipped with my application. Users are expected to place the corresponding JDBC jar, like mysql-connector-java-5.1.39.jar, in a particular folder on their own.
After reading Dropwizard's documentation, I am not sure this kind of usage is supported. Does anyone have experience making it work this way?
Since Java 6, you can use wildcards in classpaths.
With the application plugin, the generated bin folder will have a start script that contains the classpath. Instead of listing every possible jar in the lib folder, we simply include all of them with a wildcard.
Note: You can also do the same thing with different folders if you want the classpath in a different location.
This can be achieved most easily as follows (in a workaround manner, since there are problems with this plugin in my version). In build.gradle you do:
startScripts {
    doLast {
        // Locate the start scripts the application plugin just generated
        def windowsScriptFile = file getWindowsScript()
        def unixScriptFile = file getUnixScript()
        // Replace the jar-by-jar classpath with a wildcard on the lib folder.
        // Note: the generated Windows script refers to %APP_HOME%, so you may
        // need to adapt the replacement string to that variable syntax there.
        windowsScriptFile.text = windowsScriptFile.text.replaceAll('CLASSPATH=.*', 'CLASSPATH=\\$APP_HOME/lib/*')
        unixScriptFile.text = unixScriptFile.text.replaceAll('CLASSPATH=.*', 'CLASSPATH=\\$APP_HOME/lib/*')
    }
}
This will wildcard your lib folder in the start scripts. When starting up, your classpath will simply be
lib/*
When you drop jars into that folder, they will automatically be picked up (at startup, not at runtime).
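To see what the wildcard buys you, this is roughly the kind of invocation the patched start script ends up performing (the jar, main class, and config names are placeholders; note that the wildcard is expanded by the JVM, not the shell, so it must be quoted):

java -cp "myapp.jar:lib/*" com.example.MyApplication server config.yml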
I hope this helps,
Artur
When creating a vNext build on TFS 2015, you can define variables, which are then used in build steps and can also be used as environment variables in scripts the build runs.
The build I am working on runs scripts that pull files from mapped locations, so it would be great if I could define a variable and use it in a mapping. Then, for example, if I update a reference in the project being built, I could simply update the variable with the new location and have the repository mappings and scripts all pull correctly from the new location, without having to make the change in multiple places.
I have tried doing this by setting up the variable and using it in the mapping, but this generates an error when you try to save the build, complaining that there are two '$' characters in the mapping. Is there a way to do this, or is it not currently possible?
This has been causing me havoc for quite a while as well.
For starters, there is a uservoice request for this feature. You can add your votes and input here to get Microsoft to allow this feature: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/14131002-allow-variables-in-repository-variables-and-trigg
Second, we've developed a workaround that gets us most of the way there. It's not perfect, but it might be useful to you if you're comfortable with the tradeoffs or can work around the deficiencies.
Start by turning off the "Label Sources" option of the build and mapping the Server Path field to your base project. You'll want to add a custom variable to the build definition to tell the build instance which TFS location to pull from. For example, we have a base project and then multiple branches from the project, so our source is structured like this:
$/Team Project/Project1
$/Team Project/Project1Branch1
$/Team Project/Project1Branch2
$/Team Project/Project1Branch3
and we create a variable named "Branch" that we can set to "Branch1", "Branch2", and so forth.
When we want to build the base project, we leave the Branch variable blank when launching the build. For branch builds, we set it to the name of the branch we want to build.
Then our build steps look like this:
Remap Workspace Folder to Branch Folder
Get Files for Specified Branch (we have to do this manually after remapping our workspace)
Compile the Source in the Specified Branch
Publish Build Artifacts from the Specified Branch
Label the Code of the Specified Branch Manually
The Remap task runs the command
tf workfold "$/Team Project/Project1$(Branch)" "$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)"
The Manual Get task runs the following command:
tf get /recursive /noprompt "$/Team Project/Project1$(Branch)"
The build uses the Branch variable to point to the correct location of the solution file for the specified branch
$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)\SolutionFile.sln
The Publish Artifacts task uses the Branch variable in both the Contents field and the Path field
Example in Contents
**\$(Build.DefinitionName)$(Branch)\bin
The Label Code task uses the following command
tf label "$(build.buildNumber)" "$/Team Project/Project1$(Branch)" /recursive
The downside of this setup is that you don't capture Associated Changes and Work Items for your subsidiary branches, as the Server Path field is always set to the main location. This may not be an issue if you always merge from your branches to your main location prior to launching a build meant to go to production. What you can do to compensate for this really depends on your use case.
With some tweaking, you could use this same format to specify full paths as well if you needed to.
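For instance, you could put the entire server path in a variable (FullBranchPath here is a hypothetical variable name) and remap with:

tf workfold "$(FullBranchPath)" "$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)"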
It's impossible, just as the error message mentions: there are two '$' characters in the mapping. The mapping path cannot vary from build to build.
Mappings on the Repository page are used to specify the source control folder which contains the projects that need to be built in the build definition. You can set it by clicking the Ellipsis (...) button; however, you can't include variables in the mapping path.
There is a similar question: Variables in TFS Mappings on Visual Studio Online Team Builds
I've cloned the OpenLayers 3 repo and merged the latest from master. There exists a recently merged pull request that I'm interested in exploring, but I'm not sure how to create a regular old comprehensive, non-minified build.
Does anyone know how to create a non-minified, kitchen sink (everything included) build for OpenLayers?
(similar to ol-debug.js).
You can use the ol-debug.json config to concatenate all sources for the library without any minification.
node tasks/build.js config/ol-debug.json ol-debug.js
Where the ol-debug.json looks like this:
{
  "exports": ["*"],
  "umd": true
}
The build.js task generates builds of the library given a JSON config file. The custom build tutorial describes how this can be used to create minified profiles of the library. For a debug build, you can simply omit the compile member of the build config. This is described in the task readme:
If the compile object is not provided, the build task will generate a "debug" build of the library without any variable naming or other minification. This is suitable for development or debugging purposes, but should not be used in production.
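For contrast, a minimal sketch of a config with a compile member, which yields a minified custom build (the exports and compiler options here are hypothetical; consult the custom build tutorial for the options your checkout actually supports):

{
  "exports": ["ol.Map", "ol.View"],
  "compile": {
    "compilation_level": "ADVANCED"
  }
}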
So I am running into an issue when I go to build my projects using the TFS build controller: with the Output location set to "AsConfigured", it will not detect my unit tests. Let me give a little info on my setup.
TFS 2013 Update 2, Default Process Template
Here are a few screenshots that can hopefully help fill in what I can't in typing. I am copying my build output to a file share on our network so that other utilities can use it. I don't want to use "PerProject" or "SingleFolder" because they mess up the file structure we have configured (both of these will run the tests). So I have the files copied to a folder named "SingleFolderOutput", which is a child of the DropLocation. I would like to be able to run the tests from the drop folder or from the bin folder (I don't care which). However, it doesn't seem to detect/run ANY of the tests. Any help would be greatly appreciated. Please let me know if you need any additional information.
I have tried using **\*test*.dll, Install\SingleFolderOutput\**\*test*.dll, and $(TF_BUILD_DROPLOCATION)\Install\SingleFolderOutput\*test*.dll,
but I am not sure which variables are available or what the scope of execution is.
Given that you're using the Build Output location set to AsConfigured, you have to change the default value of the Test sources spec setting to allow the build to find the test libraries in the bin folders. Here's an example.
If the full path to the unit test libraries is:
E:\Builds\7\<TFS Team Project>\<Build Definition>\src\<Unit Test Project>\bin\Release\*test*.dll
use
..\src\*UnitTest*\bin\*\*test*.dll;
This question was asked on MSDN forums here.
MSDN Forums Suggested Workaround
The suggested workaround in the accepted answer (as of 8 a.m. on June 20) is to specify the full path to the test projects' binary folders. For example:
C:\Builds\{agentId}\{teamProjectName}\{buildDefinitionName}\src\{solutionName}\{testProjectName}\bin\Debug\*test*.dll
which really should have been shown as
{agentWorkingFolder}\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
However this approach is very brittle, for the following reasons:
Any new test projects you add to the solution will not be executed until you add them to the build definition's list of test sources.
It will break under any of the following circumstances:
the build definition is renamed
the working folder in build agent properties is modified
you have multiple build agents, and a different agent than the one you specified in {id} runs the build
Improved Workaround
My workaround mitigates the issues listed in #2 (can't do anything about #1).
In the path specified above, replace the initial part:
{agentWorkingFolder}
with
..
so you have
..\src\{relativePathToTestProjectBinariesFolder}\*test*.dll
This works because the internal working directory is apparently the \binaries\ folder, which is a sibling of the \src\ folder. Navigating up to the parent folder (whatever it is named; we don't care) and back into \src\ before specifying the path to the test project binaries does the trick.
Note: If you have multiple test projects, you add additional entries, separated with semicolons:
..\src\{relativePathToTestProjectONEBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTWOBinariesFolder}\*test*.dll;..\src\{relativePathToTestProjectTHREEBinariesFolder}\*test*.dll;
What I ended up doing was adding a post-build event to each test project that copies its test .dlls into a staging folder within the build, essentially equivalent to where they would go in a SingleFolder build.
if "$(TeamBuildOutDir)" == "" (
echo "Building Interactively not in TFS"
) else (
echo "Building in TFS"
xcopy "$(TargetDir)*.*" "$(TeamBuildBinaries)\" /Y /E /S
)
Then I added an MSBuild parameter to the build definition that tells the post-build event to drop the files into the folder that TFS looks in for them:
/p:TeamBuildBinaries="$(TF_BUILD_BINARIESDIRECTORY)"
Kept the default Test assembly file specification:
**\*test*.dll
View this link for information on the variable that I used and the relative path at which it exists.
Another solution is to do the reverse.
Leave all of the files in the root so that all of the built-in functionality works. There is more than just test execution in there: what about static code analysis, impact analysis, among others? You would have to do something custom for each of them.
Instead, use a pre-drop PowerShell script to create your Install arrangement from the root files.
If it is an application, you can use the _ApplicationFolder NuGet package to create a _PublishApplications folder, the same as you get for web applications.
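A minimal sketch of such a pre-drop script, assuming the root files sit in the TFS binaries staging directory and you want the Install\SingleFolderOutput arrangement from the question (the folder names and layout are assumptions):

# Pre-drop step: arrange root build output into the Install layout.
# TF_BUILD_BINARIESDIRECTORY is the TFS 2013 binaries staging directory.
$binaries = $env:TF_BUILD_BINARIESDIRECTORY
$target = Join-Path $binaries 'Install\SingleFolderOutput'
New-Item -ItemType Directory -Force -Path $target | Out-Null
# Copy everything except the Install folder itself into the new layout
Get-ChildItem -Path $binaries -Exclude 'Install' |
    Copy-Item -Destination $target -Recurse -Force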