I am currently trying to build a project whose source is in a Git repository and whose dependencies are in Artifactory. I need to first download all the sources and binaries from the repo and from Artifactory to my local workspace.
I could not find any information regarding Artifactory integration with Bazel. I can see that this feature has been requested: https://www.jfrog.com/jira/browse/RTFACT-15428?jql=labels%20%3D%20bazel
Is anyone aware of any build tools that can first download resources and then build them?
I need both git and artifactory support.
According to the Bazel documentation for Java, you can define external dependencies resolved from Maven with the maven_jar rule.
As Artifactory supports Maven, you can set up your dependencies in a Maven repository, and retrieve artifacts from there with your Bazel build script.
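For example, a minimal WORKSPACE sketch; the Artifactory URL is a placeholder and Guava is just a stand-in artifact:

maven_jar(
    name = "com_google_guava_guava",
    artifact = "com.google.guava:guava:23.0",
    repository = "https://artifactory.example.com/artifactory/libs-release/",
    # sha1 = "...",  # recommended, so Bazel can verify the download
)

The jar can then be referenced from BUILD files as @com_google_guava_guava//jar.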
On the other side of the build, publishing artifacts seems to be a work in progress and on the roadmap for Bazel.
You can also attempt to write your own Artifactory repository rules in Skylark: https://docs.bazel.build/versions/master/skylark/repository_rules.html
Remote build cache
Bazel supports any HTTP/1.1 server with PUT and GET methods as an HTTP cache, and simple HTTP Basic Authentication is supported as well. This means using Artifactory as a remote build cache is straightforward.
Create a new Generic repository in Artifactory.
Now run Bazel as:
bazel test \
  --remote_http_cache=https://user:password@[...].com:8081/artifactory/bazel/ \
  //...
See https://docs.bazel.build/versions/master/remote-caching.html for the relevant Bazel doc.
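Since the protocol is plain HTTP GET/PUT (Bazel stores action results under /ac/ and content-addressed blobs under /cas/), you can sanity-check the repository with curl first; a sketch with placeholder host, credentials and hash:

# upload a blob to the content-addressable store, then read it back
curl -u user:password -X PUT --data-binary @myfile \
  https://artifactory.example.com:8081/artifactory/bazel/cas/0123abcd
curl -u user:password \
  https://artifactory.example.com:8081/artifactory/bazel/cas/0123abcd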
I have a Google Source Repository that mirrors a private Github repo of mine that contains code for a Python package. I also have a Cloud Run instance where I'd like to install this private Python library. How can I go about doing that in the cloudbuild.yaml or Dockerfile?
Cloud Source Repositories are Git repositories. So, if you want to get the sources of your private library from there, simply clone the repo (the head, or a specific tag/commit SHA).
You can't use the pip install command for this, because that would require a private PyPI server. If you have one, get the credentials and point pip at it.
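For illustration, assuming you did have a private PyPI server (the hostname and package name are hypothetical):

pip install my-private-lib \
  --extra-index-url https://user:password@pypi.internal.example.com/simple/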
That's the principle. Now, how do you achieve this in Cloud Build or in a Docker build?
IMO, the easiest is in the Cloud Build pipeline. Load a Git-capable image (for instance gcr.io/cloud-builders/git) and clone your Cloud Source repo. I recommend this step because you can use the Cloud Build authentication context to log into the Cloud Source repository.
Then, in your Dockerfile, copy your whole environment (your code and the venv that contains the downloaded library). The additional public libraries can be downloaded inside the Dockerfile, or in Cloud Build, as you wish.
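A minimal cloudbuild.yaml sketch of that idea; the repository and image names are placeholders:

steps:
# clone the private library using Cloud Build's own authentication context
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', 'https://source.developers.google.com/p/$PROJECT_ID/r/my-private-lib']
# build the image; the Dockerfile copies the cloned sources into it
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service', '.']
images:
- 'gcr.io/$PROJECT_ID/my-service'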
I have upgraded the Nexus repository manager from 2.x to 3.x through the following path:
2.4.14 -> 3.4.0 -> 3.5.1
All Nexus services were packed in Docker, with the data directory mapped from the host. For all services I use the default sonatype/nexus or sonatype/nexus3 containers. The Nexus web interface is hidden behind nginx with simple reverse proxying.
I use the Nexus service with boot-cj tools (with no credentials), which manage dependencies the same way as Maven. The tool first downloads nexus-maven.xml with the relevant sha1 files and then tries to download the JARs. This worked fine with all the 2.x versions I had.
I created a proxy repository against the remote sonatype-snapshots repo. When I start compilation I get a Could not find artifact error. I found that the metadata files are cached, but not the POMs and JARs.
I have tried to fix it by cleaning the cache with the clean_cache file trick, and more roughly with rm -rfv /srv/nexus3/nexus-data/cache/*, with no success. There are no logs about the error. I have also checked manually that the required artifact exists in the remote repository. The more obvious Rebuild index button gave no solution either. I do not think it is a problem with nginx, but who knows? Leaving it overnight so the scheduled tasks could run did not help either.
The expected artifact is org.eclipse.rdf4j:rdf4j:pom:2.3-20170901.145510-11.
We're using the hosted build agent on VSTS to build and release our ASP.NET Core code to Azure App service.
My question is: can we run WebPack to handle front-end tasks on this hosted build on VSTS or do we have to do it manually before checking the code into our repository?
Update:
I'm utilizing the new ASP.NET Core Build (Preview) template that's available on VSTS; the build steps are the template's out-of-the-box defaults.
For VSTS, we're working on an extension; it's currently in beta, and you can ask for access.
Check the VSTS Marketplace.
Check this GitHub repo.
Webpack is definitely not a first-class citizen in VS2015 and VSTS. Streamlining webpack for CI/CD has been a real headache in my case, especially as webpack was introduced hastily to solve dreadful performance issues in a large monolithic SPA (ASP.NET 4.6, Kendo, 15,000 files, 2,000 folders). To cut it short, after trying many scenarios to make sure that freshly rebuilt bundles would end up in IIS and in the Azure web app, I settled on a 2-pass build.

The sequence of VSTS tasks is as follows (a rough shell sketch follows below): npm install (global), npm install (local), webpack install (local), webpack install (global), build pass 1, webpack, build pass 2, etc... This works with hosted and private agents, provided you supply the proper path for webpack, since webpack is installed in a different location on hosted and on private agents (I did not find a way to choose the webpack install location for consistency). I scorch everything before starting the build.

You also need to do two things in the VS2015 solution: (1) unload the "built" folder, and (2) add Content Include="Built\**" to the project file. The "built" folder contains the bundles and should appear greyed out, otherwise there are more bad surprises and instabilities to deal with...
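As a rough shell sketch of the npm/webpack portion of that sequence (the exact install targets and the webpack config name are assumptions):

npm install                           # restore local project dependencies
npm install webpack --save-dev        # local webpack
npm install -g webpack                # global webpack; note the install path differs per agent type
# ... MSBuild pass 1 runs here ...
webpack --config webpack.config.js    # regenerate the bundles into the "built" folder
# ... MSBuild pass 2 runs here and packages the fresh bundles ...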
The build-pass #2 task in the VSTS build collects the fresh bundles generated after build pass #1 and automatically includes them in the package to be published.
Without a second build pass, collecting the bundles and merging them into the zip package is a nightmare, especially when you have 15,000 files to unzip and then rezip (300 ms per file!). I did not find a file-merging capability that I could readily use in VSTS.
I have my ears to the ground, listening for someone to come up with a more efficient CI/CD scheme for webpack. In the meanwhile, my 2-pass-build workaround is working flawlessly, but it is slow indeed.
I anticipate that the advances in ASP.NET Core, Angular 2 and webpack will solve this elegantly.
So far I have installed the binary distribution of WSO2 API Manager. Now I would like to build it from source and run that instead of the binaries I downloaded from the site.
Based on WSO2 documentation, I understand the steps are:
1) Download the carbon kernel source:
git clone -b 4.4.x https://github.com/wso2/carbon-kernel.git
2) Download the APIM source:
git clone https://github.com/wso2/product-apim
3) Build APIM from source
cd <SOURCE-DIR>\product-apim
mvn clean install
Are these steps sufficient, or am I missing something?
Should I build carbon-kernel in addition to building apim-manager?
In a previous Stack Overflow question, I read that carbon-kernel is not really necessary, and that I should instead download and build carbon-apimgt. Is this correct?
After I build the sources, how do I "package" all the compiled binaries along with all other necessary artifacts, in order to form an equivalent package to the wso2am-1.10.0.zip which I download from the site? Or is there another way to install and run the built code?
GitHub projects related to API Manager can be found in the following locations:
apimgt component repo:
https://github.com/wso2/carbon-apimgt
This repository contains the source code of the org.wso2.carbon.apimgt component.
product repo:
https://github.com/wso2/product-apim
This repository contains all the resources needed to build the product package, plus the integration tests for the product.
The master branches of these repositories are used for current development (if you open the parent pom.xml file you will find SNAPSHOT versions). If you build the default branches, you will build the current development version of API Manager (at this time, 1.10.1-SNAPSHOT). To build an already released product, you need to build the corresponding release tag.
Steps to Build API manager 1.10.0
Clone the product:
git clone https://github.com/wso2/product-apim
Check out the release tag v1.10.0:
git checkout v1.10.0
Build the product:
mvn clean install (or mvn clean install -Dmaven.test.skip=true to skip integration tests)
Get the product from:
product-apim\modules\distribution\product\target
You do not have to build the 'carbon-apimgt' repository, because the component built from it has already been released and can be found in the Nexus repo.
If you want to build the component (say, you need to provide a fix for a bug), build the 'v5.0.3' tag from the 'carbon-apimgt' repo:
git clone https://github.com/wso2/carbon-apimgt
git checkout v5.0.3
I'm posting the steps I did:
git clone https://github.com/wso2/carbon-apimgt
git clone https://github.com/wso2/product-apim
cd <SRC>/carbon-apimgt
mvn clean install
cd <SRC>/product-apim
mvn clean install
The ZIP file was found in
<SRC>\product-apim\modules\distribution\product\target
It is similar to the ZIP file that you download from the site.
I'm developing some applications in Clojure + Java using Eclipse and Maven with m2eclipse.
This works great when my dependencies are in a Maven repository. However there are some libraries that I would like to use that aren't in any repository - e.g. small open source Clojure libraries hosted on GitHub.
These projects typically have a build.xml or a project.clj but not a pom.xml.
Is there any way to set up Maven to handle these dependencies automatically? Or do I need to manually download and build all of these?
Unfortunately no, you'll either have to:
find a repository containing those libraries
manually add them to your local repository using mvn install:install-file (see the sketch after this list), and, if you're kind enough, ask for them to be published in the central Maven repo
ask the developers if they would be so kind as to provide a mavenized version and publish it in some Maven repository
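A minimal sketch of the manual install, assuming you have already built the jar locally (the coordinates are placeholders):

mvn install:install-file \
  -Dfile=target/some-clojure-lib-0.1.0.jar \
  -DgroupId=com.example \
  -DartifactId=some-clojure-lib \
  -Dversion=0.1.0 \
  -Dpackaging=jar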
Clojure libraries often publish their artifacts to Clojars; you might solve your issue just by adding it as a repository in your pom.xml.
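For instance:

<repositories>
  <repository>
    <id>clojars</id>
    <url>https://clojars.org/repo</url>
  </repository>
</repositories>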
Another option, when integrating Leiningen and Maven builds, is to automatically generate a POM from a project.clj via lein pom.
This would allow you to include the libraries in your build, as long as you have checked them out locally.
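A sketch of that flow for a hypothetical GitHub-hosted library:

git clone https://github.com/someuser/some-clojure-lib.git
cd some-clojure-lib
lein pom       # generate pom.xml from project.clj (useful to inspect the coordinates)
lein install   # build the jar and install it into the local ~/.m2 repository

After that, the library can be referenced from your pom.xml by the coordinates declared in its project.clj.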