What is the difference between Clone and Import in a build definition?

I see an option to import a build definition from a JSON file and create a definition from it.
There is also an option to clone a build definition from an existing definition.
So what is the difference between the two in VSTS?

Clone - A VSTS build definition has a useful "Clone..." option which duplicates the entire definition. It lets you take an existing definition and make a copy to modify further. You can also clone an individual build step within a saved build definition directly with Ctrl+Click and drag/drop of that task/step.
Import - The Import option for build definitions is useful for recreating all of the build steps, variables, schedules, etc. in the same or a different team project, i.e. import lets you bring a definition in from another project or from somewhere outside.

Cloning makes a direct copy within the same team project, all within the UI.
Export/import lets you export and import the raw JSON file that defines the build. That is useful if you want to share the build definition across team projects or accounts, if you want to make big changes that are difficult to do via the UI, or if you want to script or otherwise automate creating or changing build definitions.
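As an illustration of the scripting angle, something like the following could pull a definition's JSON from one project and create it in another via the REST API. The account, project names, and definition id are placeholders, and the exact api-version you need may differ, so treat this as a sketch rather than a verified recipe:
# export: download the definition's JSON (definition id 42 is a placeholder)
curl -u username:personal-access-token \
  "https://myaccount.visualstudio.com/DefaultCollection/SourceProject/_apis/build/definitions/42?api-version=2.0" \
  -o build-definition.json

# import: create a definition from that JSON in another project
# (you may need to strip project-specific fields such as the id before posting)
curl -u username:personal-access-token \
  -X POST -H "Content-Type: application/json" \
  -d @build-definition.json \
  "https://myaccount.visualstudio.com/DefaultCollection/TargetProject/_apis/build/definitions?api-version=2.0"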

Is there a way to make Bazel work with transitive repositories?

I work with a massive codebase distributed across many repositories that uses even more third-party dependencies. The goal is to make the build hermetic, and I am contemplating using Bazel to achieve it. On the one hand, Bazel has the git_repository rule to refer to external repos in the WORKSPACE file. On the other hand, WORKSPACE files are not loaded recursively, so to get at indirect dependencies I would need to build an all-inclusive WORKSPACE file somehow. I wonder if somebody has already tackled this problem using Bazel or some other existing tools. Is there a way to expand the WORKSPACE as part of the build? Maybe the WORKSPACE can #include other (generated) files?
WORKSPACE files can load and then call macros, which gives similar functionality to #include.
A common pattern is for each project to have a macro which calls other projects' macros (for dependencies on those projects) and creates *_archive rules (for dependencies directly on files to download), so that it builds. For example, protobuf has protobuf_deps to implement this pattern. If you create a repository with protobuf (using git_repository, http_archive, or any of the other repository rules), then you can load that macro and call it, and you'll automatically get all the transitive dependencies.
For example (from Chromium):
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# This com_google_protobuf repository is required for proto_library rule.
# It provides the protocol compiler binary (i.e., protoc).
http_archive(
name = "com_google_protobuf",
strip_prefix = "protobuf-master",
urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
I'm showing http_archive because it's easier to work with, but you can easily change it to git_repository if you want.
Another common pattern which makes this all work is the way protobuf_deps checks native.existing_rule before creating each http_archive. That lets you instantiate a specific version (or one from a specific source, etc.) of a dependency directly in your WORKSPACE file to override the one protobuf would otherwise bring in.
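As a sketch of that pattern (my own illustration, with a made-up macro name), a project's deps macro in a .bzl file typically looks something like this:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_project_deps():
    # Only create the repository if the caller's WORKSPACE hasn't already
    # defined one with the same name, so users can override the version.
    if not native.existing_rule("com_google_protobuf"):
        http_archive(
            name = "com_google_protobuf",
            strip_prefix = "protobuf-master",
            urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
        )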

julia: rerun unittests upon changes to files

Are there Julia libraries that can run unit tests automatically when I make changes to the code?
In Python there is the pytest-xdist library, which can rerun unit tests when you make changes to the code. Does Julia have a similar library?
A simple solution can be built using the standard library module FileWatching, specifically FileWatching.watch_file. Despite the name, it can be used with directories as well. When something happens to the directory (e.g., you save a new version of a file in it), it returns an object with a field changed, which is true if the directory has changed. You could of course combine this with Glob to watch a specific set of source files instead.
You could have a separate Julia process running, with the project's environment active, and use something like:
julia> import Pkg; import FileWatching: watch_file

julia> while true
           event = watch_file("src")
           if event.changed
               try
                   Pkg.pkg"test"
               catch err
                   @warn("Error during testing:\n$err")
               end
           end
       end
More sophisticated implementations are possible; with the above you would need to interrupt the loop with Ctrl-C to break out. But this does work for me and happily reruns tests whenever I save a file.
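For instance, a slightly more elaborate sketch (mine, not from the original answer, assuming the package has src and test directories) watches both directories with async tasks:
julia> import Pkg; import FileWatching: watch_file

julia> function watch_and_test(dir)
           while true
               event = watch_file(dir)
               if event.changed
                   try
                       Pkg.pkg"test"
                   catch err
                       @warn("Error during testing:\n$err")
                   end
               end
           end
       end

julia> t1 = @async watch_and_test("src")

julia> t2 = @async watch_and_test("test")

julia> wait(t1); wait(t2)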
If you use a GitHub repository, there are ways to set up Travis CI or AppVeyor to do this. This is the testing method used by many of the registered Julia packages. You will need to write the unit test suite (with using Test) and place it in a test/ subdirectory of the GitHub repository. You can search for Julia and those web services for details.
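For example, a minimal .travis.yml for a Julia package might look roughly like this (the Julia version listed is just an example); with language: julia, Travis runs the package's test/runtests.jl automatically on every push:
language: julia
os:
  - linux
julia:
  - 1.0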
Use a standard GNU Makefile and call it from various places depending on your use case:
Your .juliarc.jl if you want to check for tests on startup.
Cron if you want them checked regularly.
Inside your module's __init__ function to check every time the module is loaded.
Since GNU make detects changes via file timestamps, calls to make will silently do nothing when nothing has changed; a minimal sketch is shown below.
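Here is one way such a Makefile could look (the src/test layout and the stamp-file name are assumptions, and recipe lines must be indented with a tab):
# rerun the Julia test suite only when a source or test file is newer than the stamp
.PHONY: test
test: .test-stamp

.test-stamp: $(wildcard src/*.jl) $(wildcard test/*.jl)
	julia --project=. -e 'import Pkg; Pkg.test()'
	touch .test-stamp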

How to build same library more than once in Yocto?

I have 2 applications, and both use the same library, but the library should be built with a flag enabled in one and disabled in the other. It is a static library, so there won't be a conflict at runtime. The applications are built separately and the library is separate; in each configuration the library is built under a different name, which is taken care of by the makefile. This can be done manually, but now I need to add it to Yocto.
In Yocto, how can I build the same library two times in separate configurations?
If you're limited to a .bbappend and you don't want to duplicate the recipe, you can add some additional tasks. In these additional tasks (run after the regular installation) you can do configuration/compilation/installation once again, but with any kind of additional actions, variable overrides, or whatever else you need. Something like this:
do_special_configure() {
    oe_runmake clean
    export MAGIC_VARIABLE="magic value"
    do_configure
}

do_special_compile() {
    export MAGIC_VARIABLE="magic value"
    do_compile
}

fakeroot do_special_install() {
    export MAGIC_VARIABLE="magic value"
    do_install
}

do_special_configure[dirs] = "${B}"
do_special_compile[dirs] = "${B}"
do_special_install[dirs] = "${B}"

addtask special_configure after do_install before do_special_compile
addtask special_compile after do_special_configure before do_special_install
addtask special_install after do_special_compile before do_package do_populate_sysroot
If the different configurations really produce different installed files, then you'll have no problem adding two separate recipes that just happen to have the same SRC_URI.
Well, you can't, not without two recipes.
Your two applications can't influence, in any way, how the library is built. Thus, your options (as long as both applications should be available for the same machine/distro combination) basically are:
Create a 2nd recipe (in this case likely in your layer, though preferably in the upstream layer). If the recipe you're copying uses an .inc file and a small .bb that mostly just includes that file, you can easily do the same. Otherwise, your options are to either copy the recipe and modify it, or to have your new recipe
require <PATH_FROM COREBASE-TO-THE-UPSTREAM-RECIPE>/upstream-recipe.bb
If possible, modify the upstream recipe (preferably using a .bbappend) to simultaneously build both versions that you require.
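For illustration, the second-recipe route (the first option above) could look roughly like this; the recipe path, the names, and the SPECIAL_FLAG variable are all made up and depend on your library's makefile:
# mylib-special_1.0.bb -- hypothetical second recipe reusing the original one
# (PN becomes "mylib-special" automatically from the file name, so the two
#  builds don't collide in the package feed or the sysroot)
require recipes-libs/mylib/mylib_1.0.bb

# pass the extra configuration flag through to the library's makefile
EXTRA_OEMAKE += "SPECIAL_FLAG=1"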

TFS 2015 builds : Is it possible to use Variables in Repository mappings?

When creating a vNext build on TFS 2015 you can define variables, which are then used in build steps, and can also be used as environment variables in scripts the build runs.
The build I am working on runs scripts that pull files from mapped locations, so it would be great if I could define a variable and use it in a mapping. Then, for example, if I update a reference in the project being built, I could simply update the variable with the new location and have the repository mappings and the scripts all pull correctly from the new location, without having to make the change in multiple places.
I have tried setting up a variable and using it in the mapping, but this generates an error when you try to save the build, complaining that there are two '$' characters in the mapping. Is there a way to do this, or is it not currently possible?
This has been causing me havoc for quite a while as well.
For starters, there is a uservoice request for this feature. You can add your votes and input here to get Microsoft to allow this feature: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/14131002-allow-variables-in-repository-variables-and-trigg
Second, we've developed a workaround that gets us most of the way there. It's not perfect, but it might be useful to you if you're comfortable with the tradeoffs or can work around the deficiencies.
Start by turning off the "Label Sources" option of the build and mapping the Server Path field to your base build. You'll want to add a custom variable to the build definition to tell the build instance which TFS location to pull from. For example, we have a base project and then multiple branches of the project, so our source is structured like this
$/Team Project/Project1
$/Team Project/Project1Branch1
$/Team Project/Project1Branch2
$/Team Project/Project1Branch3
and we create a variable named "Branch" that we can set to "Branch1", "Branch2", and so forth.
When we want to build the base project, we leave the Branch variable blank when launching the build. For branch builds, we set it to the name of the branch we want to build.
Then our build steps look like this:
Remap Workspace Folder to Branch Folder
Get Files for Specified Branch (we have to do this manually after remapping our workspace)
Compile the Source in the Specified Branch
Publish Build Artifacts from the Specified Branch
Label the Code of the Specified Branch Manually
The Remap task runs the command
tf workfold "$/Team Project/Project1$(Branch)" "$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)"
The Manual Get task runs the following command
tf get /recursive /noprompt "$/Team Project/Project1$(Branch)"
The build uses the Branch variable to point to the correct location of the solution file for the specified branch
$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)\SolutionFile.sln
The Publish Artifacts task uses the Branch variable in both the Contents field and the Path field
Example in Contents
**\$(Build.DefinitionName)$(Branch)\bin
The Label Code task uses the following command
tf label "$(build.buildNumber)" "$/Team Project/Project1$(Branch)" /recursive
The downside of this setup is that you don't capture Associated Changes and Work Items for your subsidiary branches, as the Server Path field is always set to the main location. This may not be an issue if you always merge from your branches into the main location before launching a build that is meant to go to production. What you can do to compensate for this really depends on your use case.
With some tweaking, you could use this same format to specify full paths as well if you needed to.
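For instance, the remap command could take the entire server path from a variable instead (SourcePath here is a hypothetical variable you would define on the build):
tf workfold "$(SourcePath)" "$(build.sourcesDirectory)\$(Build.DefinitionName)"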
It's impossible. Just as the error message mentions, there are two '$' characters in the mapping, which means your application's path shouldn't vary from build to build.
Mappings on the Repository page are used to specify the source control folder that contains the projects to be built in the build definition. You can set it by clicking the ellipsis (...) button; however, you can't include variables in the mapping path.
There is a similar question: Variables in TFS Mappings on Visual Studio Online Team Builds

OpenLayers 3 Build from master

I've cloned the OpenLayers 3 repo and merged the latest from master. There is a recently merged pull request that I'm interested in exploring, but I'm not sure how to create a regular old comprehensive, non-minified build.
Does anyone know how to create a non-minified, kitchen-sink (everything included) build of OpenLayers (similar to ol-debug.js)?
You can use the ol-debug.json config to concatenate all sources for the library without any minification.
node tasks/build.js config/ol-debug.json ol-debug.js
Where the ol-debug.json looks like this:
{
  "exports": ["*"],
  "umd": true
}
The build.js task generates builds of the library given a JSON config file. The custom build tutorial describes how this can be used to create minified profiles of the library. For a debug build, you can simply omit the compile member of the build config. This is described in the task readme:
If the compile object is not provided, the build task will generate a "debug" build of the library without any variable naming or other minification. This is suitable for development or debugging purposes, but should not be used in production.
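For comparison, a debug config that exports only the symbols you actually use could look something like the following (the symbol names here are just examples); to get a minified profile instead, you would add a compile object with Closure Compiler options as described in the custom build tutorial:
{
  "exports": [
    "ol.Map",
    "ol.View",
    "ol.layer.Tile",
    "ol.source.OSM"
  ],
  "umd": true
}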