Nexus 3.5.1 proxies nothing but Maven metadata files from snapshot repo - clojure

I have upgraded nexus repository from 2.x to 3.x through following path:
2.4.14 -> 3.4.0 -> 3.5.1
All Nexus services were packaged in Docker, with the data directory mapped from the host. For all services I use the default sonatype/nexus or sonatype/nexus3 containers. The Nexus web interface is hidden behind nginx with simple reverse proxying.
I use the Nexus service with the boot-clj tool (with no credentials), which manages dependencies the same way as Maven. The tool first downloads nexus-maven.xml with the relevant sha1 files and then tries to download the jars. It worked fine with all the 2.x versions I had.
I created a proxy repository against the remote sonatype-snapshots repo. When I start compilation I get a Could not find artifact error. I found that the metadata files are cached, but none of the POMs and JARs are.
I have tried to fix it by cleaning the cache with the clean_cache file trick, and with the cruder rm -rfv /srv/nexus3/nexus-data/cache/*, with no success. There are no log entries indicating any error. I have also checked manually that the required artifact exists in the remote repository. The more obvious Rebuild index button did not help either. I do not think it is a problem with nginx, but who knows? Leaving it running overnight so the scheduled tasks could kick in did not help either.
The expected artifact is org.eclipse.rdf4j:rdf4j:pom:2.3-20170901.145510-11.
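One way to narrow this down (not from the original post, just a suggested check) is to request the missing POM directly, once from the remote and once through the proxy, and compare the responses; the Nexus host below is a placeholder and the proxy repository is assumed to be named sonatype-snapshots:

# Directly from the remote snapshot repository (should return 200)
curl -fsSI https://oss.sonatype.org/content/repositories/snapshots/org/eclipse/rdf4j/rdf4j/2.3-SNAPSHOT/rdf4j-2.3-20170901.145510-11.pom

# Through the Nexus 3 proxy repository (hypothetical host and repository name)
curl -fsSI http://nexus.example.com/repository/sonatype-snapshots/org/eclipse/rdf4j/rdf4j/2.3-SNAPSHOT/rdf4j-2.3-20170901.145510-11.pom

If the first call succeeds and the second returns 404, the problem is on the proxy side (e.g. its not-found cache) rather than in boot-clj or nginx.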

Related

npm install on Elastic Beanstalk omitting folders

This only started happening tonight, and even after reverting my totally not-npm related changes it's still happening.
I've got an AWS Elastic Beanstalk setup here where I'm calling eb deploy to deploy a KeystoneJS cms application. As part of the deployment it runs npm install, and I've got a custom fork/branch of the keystone github repo that it's supposed to install. And it does! But for some inexplicable reason /lib/core/ in the Keystone repo is just... not there. I get errors complaining about those missing files, and sure enough the entire folder is not present. They are just not npm installed, despite the rest of the Keystone repo being installed just fine.
I can't reproduce this locally. I'll run npm install, it adds that folder. I'll do npm install <my-fork>, it adds the folder. Every combination locally works just fine, and every deployment I've done to EBS in the PAST has worked just fine. Only tonight has this folder stopped showing up in my installations.
Is it a problem with Elastic Beanstalk? Is it a problem with npm? I've made sure to sync my local npm version (6.8.0) with the EB one, no difference. I've checked to make sure I don't have any .ebignore or .npmignore or .gitignore that might somehow be blocking the core folder, nothing. Unless there's one secretly controlling the temp folder that gets first installed to? I don't know why this would suddenly be an issue though, when it wasn't a couple weeks ago.
Anyone experienced anything like this?
[Edit] For some additional details: changing the keystone version in my package.json to just keystone: "4.0.0" gets me those core files fine. If I install directly from the associated keystone repo, keystone: "keystonejs/keystone", they aren't there. Again, this is only on the eb install, though; the core files show up in both cases if I install locally. But on eb, when I install from a git URL, which I need to do for my specific fork/branch, I see this issue.
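For reference (commands reconstructed from the description above, not from the original post), the two install forms being compared are:

# Registry install - /lib/core/ ends up present on EB
npm install keystone@4.0.0

# Git install (needed for the custom fork/branch) - /lib/core/ goes missing on EB
npm install keystonejs/keystone

Both forms behave identically on a local machine; the difference only shows up during the Elastic Beanstalk deployment.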
Well, I figured it out!
https://npm.community/t/npm-pack-leaving-out-files-6-8-0-only/5382
Someone broke npm 6.8.0. Let my tale be a cautionary one: don't have your deployment scripts set to auto-update npm to the latest version.
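A minimal sketch of how that pin might look in a deploy/build script (the version below is just an example of a known-good release at the time, not a recommendation):

# Pin npm explicitly instead of auto-updating to @latest
npm install -g npm@6.7.0
npm --version   # print the version in the build log so an unexpected upgrade is easy to spot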

Blender on IBM Cloud (Cloud Foundry)

I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using the Python buildpack.
One of my requirements is to install Blender.
Everything else works fine, except for the Blender installation.
What I've tried so far:
I tried accessing my app over an SSH connection, but of course I don't have root access to run apt-get install blender!
I also tried to include blender in the packages.json file and push that file using cf push my-app.
But nothing worked for me.
In short: what is the standard approach for installing packages in Cloud Foundry apps, the way we would use apt-get install on Ubuntu/Debian?
Please correct me if I did anything wrong, or give me some pointers to solve this problem!
I see a couple of options for installing packages if they cannot be installed using the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it. The libraries will be uploaded. That is how I would do it.
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected (see the sketch below).
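A rough sketch of option 2, based on standard Cloud Foundry SSH support; the app name, file and SSH endpoint are placeholders, and note that anything copied this way lives only in the current container and is lost on restage or restart:

# Get a one-time passcode to use as the scp password
cf ssh-code

# Copy a file into instance 0 of the app via the platform's SSH endpoint
scp -P 2222 -o User=cf:$(cf app my-app --guid)/0 blender-bundle.tar.gz ssh.example.com:/home/vcap/app/vendor/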
Regarding Blender, the question is what you need in addition to having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Maybe packaging everything as one or more containers and running it on Kubernetes, or a combination of Cloud Foundry and Kubernetes, is a better way.

Webpack: Should I build bundle on production server or build it locally and then upload?

I am deploying a React app on AWS Elastic Beanstalk. I bundle the app using webpack. However, I'm slightly confused about what the best practices are for the production build process. Should I build the app locally (with NODE_ENV=production) using webpack and then just upload the resulting bundle.js file, along with all node_modules, to the Elastic Beanstalk instance? Or should I upload all the source files and run webpack on the actual AWS cloud server during deployment?
You should never build for production locally (unless you're the only developer).
Ideally, you have a build process that gets triggered manually or automatically from a git commit which then builds your project for production for you.
By using a centralized build process, you can then be sure that all your builds are built the same way (e.g. same node version, same npm or yarn version).
Neither approach is really good, to be honest. Building locally is not the best way to build anything you want to run in production. You might have packages installed locally that affect what you're building. The same applies to the OS you're doing it on.
And, again, the same applies to building during deployment. As the name suggests, deployment is just that: deploying, i.e. placing your application on the server so it can serve requests as it is supposed to.
That's where CI/CD comes in. Having that kind of solution guarantees that each build is done with the same steps and on the same solution stack. You want no differences between builds, because that lets you assume that any bug, or any deviation from the design, comes from the code and not from the environment it was built in.
Assuming that you're the only developer here (since you're asking about this), full CI/CD might be definite overkill, so just create a shell script with the build steps and use Docker as the build environment, so it stays the same between builds. That's the closest you can get to the CI/CD option without the hassle.
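A minimal sketch of that idea, assuming a Node project with webpack 4 as a devDependency and a package-lock.json; the image tag and script name are placeholders to adapt:

#!/bin/sh
# build.sh - run the production build inside a pinned Node image so every build
# uses the same Node and npm versions, no matter which machine runs it
set -e
docker run --rm -v "$PWD":/app -w /app node:10 \
  sh -c "npm ci && npx webpack --mode production"

The produced bundle is then what gets uploaded to Elastic Beanstalk, instead of running webpack on the instance during deployment.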

WebPack on VSTS Hosted Build

We're using the hosted build agent on VSTS to build and release our ASP.NET Core code to Azure App service.
My question is: can we run WebPack to handle front-end tasks on this hosted build on VSTS or do we have to do it manually before checking the code into our repository?
Update:
I'm utilizing the new ASP.NET Core Build (Preview) template that's available on VSTS, with its out-of-the-box steps.
For VSTS we're working on an extension; it's currently in beta, and you can ask to have it shared with you.
Check the VSTS marketplace.
Check this github repo.
Webpack is definitely not a first-class citizen for VS2015 and VSTS. Streamlining webpack for CI/CD has been a real headache in my case, especially as webpack was introduced hastily to solve dreadful performance issues with a large monolithic SPA (ASP.NET 4.6, Kendo, 15,000 files, 2,000 folders). To cut it short, after trying many scenarios to make sure that freshly rebuilt bundles would end up in IIS and the Azure web app, I settled on a 2-pass build. The sequence of VSTS tasks is as follows: npm install global, npm install local, npm webpack install local, npm webpack install global, build pass 1, webpack, build pass 2, etc. This works with both hosted and private agents, provided you supply the proper path for webpack, as webpack is installed in a different location on hosted and private agents (I did not find a way to choose the webpack install location for consistency). I scorch everything before starting the build. You also need to do two things in the VS2015 solution: (1) unload the "built" folder, and (2) add Content Include="Built\**" in the project file. The "built" folder contains the bundles and should appear greyed out; otherwise there are more bad surprises and instabilities to deal with...
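As a rough translation of the npm/webpack part of that task sequence into plain commands (the config file name and output folder are assumptions based on the description above):

# restore packages and make webpack available both globally and locally
npm install -g webpack
npm install

# regenerate the bundles into the Built\ folder between build pass 1 and build pass 2
node_modules/.bin/webpack --config webpack.config.js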
The Build-Pass #2 task in the VSTS build collects the fresh bundles generated by Build-Pass #1 and includes them automatically in the package to be published.
Without a second build pass, collecting the bundles and merging them into the zip package is a nightmare, especially when you have 15,000 files to unzip and then rezip (300 ms per file!). I did not find a file-merging capability that I could readily use in VSTS.
I have my ear to the ground, listening for someone to come up with a more efficient CI/CD scheme for webpack. In the meantime, my 2-pass-build workaround is working flawlessly, but it is indeed slow.
I anticipate that the advances with ASP.NET Core, Angular 2 and webpack will solve this more elegantly.

How can I run gradle wrapper behind a firewall / using a proxy maven server?

I have been trying to get Gradle working on our Continuous Integration server, which has no access to internet (external) URLs.
Currently, we get our maven-style dependencies from an internal proxy server. So I uploaded the gradle wrapper onto that server too, such that when the CI server starts up it can download the wrapper from the internal maven proxy server.
Problem solved, I thought; the build will carry on and pull down the project dependencies from the internal proxy server as well (it's set up in the build script) and should be OK now.
But in between getting the wrapper Zip file and starting the build, it's doing the following:
Downloading http://maven.internal.mycompany.com:8081/nexus/content/repositories/thirdparty/org/gradle/gradle/1.0-milestone-3/gradle-1.0-milestone-3-bin.zip ................
Unzipping /home/user/.gradle/wrapper/dists/gradle-1.0-milestone-3-bin.zip to /home/user/.gradle/wrapper/dists
Set executable permissions for: /home/user/.gradle/wrapper/dists/gradle-1.0-milestone-3/bin/gradle
Download http://repo1.maven.org/maven2/org/codehaus/groovy/groovy/1.7.3/groovy-1.7.3.pom
Download http://repo1.maven.org/maven2/antlr/antlr/2.7.7/antlr-2.7.7.pom
etc...
*** then the actual build starts ***
Download http://maven.internal.mycompany.com:8081/nexus/content/groups/public/commons-lang/commons-lang/2.6/commons-lang-2.6.jar
That is, it's trying to pull down extra dependencies for the Gradle executable from repo1.maven.org, which fails on the continuous integration server, as it has no access to that server.
In my build.gradle file I have:
repositories {
    mavenRepo urls: "http://maven.internal.mycompany.com:8081/nexus/content/groups/public"
}
and in my ./gradle/wrapper/gradle-wrapper.properties file I have :
distributionUrl=http\://maven.internal.mycompany.com:8081/nexus/content/repositories/thirdparty/org/gradle/gradle/1.0-milestone-3/gradle-1.0-milestone-3-bin.zip
So is there another place where I can specify which server the wrapper should use to get its additional dependencies? Or is this hard-coded into the wrapper itself? Or I might be missing a trick here, as Google doesn't seem to show anyone else having this issue at all!
Ben
Picked up a hint from another forum that led me to the answer: a plugin for Cobertura that I was pulling down had its own Gradle build file that included the default Maven repositories.
I've removed that now, and the calls to the external Maven repository have stopped.
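For anyone hitting the same thing, a hedged sketch of how you could force every project, including plugin builds like that one, onto the internal mirror with a Gradle init script; the repository URL is the one from the question, but the syntax is for newer Gradle versions than the milestone build used above:

// ~/.gradle/init.gradle
def internalRepo = "http://maven.internal.mycompany.com:8081/nexus/content/groups/public"

allprojects {
    repositories {
        // remove anything that is not the internal mirror, e.g. repo1.maven.org added by a plugin's build file
        all { repo ->
            if (!(repo instanceof MavenArtifactRepository) || repo.url.toString() != internalRepo) {
                remove repo
            }
        }
        maven { url internalRepo }
    }
}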