I work on a project where I am the only front-end developer amongst a team of C++ developers. When we build our release variant, I want the C++ developers to run the web build process (npm install, grunt/gulp build which does concat/minification/etc...). In order for that to happen, they have to npm install all the devDependencies.
Is there a way to let them quickly install the necessary npm modules without re-downloading them every time npm install is called? Or to make npm install only go through installation once?
npm link doesn't work since that links to the web application and not the node modules that the web application depends upon.
tar.gz would be possible but that means updating the tar.gz every time a node module gets updated.
Curious what development process others suggest for tackling work in a mixed-language environment.
You can check node_modules into Git, or whatever version control you're using, so the modules won't be downloaded every time.
Yes, someone will have to update modules once in a while, but some people (including npm itself) do just that.
You can also put up a caching proxy server (e.g. sinopia) to download packages from, so downloading would be a bit faster.
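A minimal sketch of the caching-proxy route, assuming sinopia's default port:

npm install -g sinopia
sinopia &                                # serves a caching registry on http://localhost:4873 by default
npm set registry http://localhost:4873/  # point npm at the proxy

The first npm install still downloads from the public registry, but later installs are served from sinopia's local cache.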
How do you structure a Node.js package containing wrapped C++ code that's compiled to WebAssembly, so that when you run npm install <package name>, the compilation step happens?
I have a package mypackage configured so that when I run npm run build in its project directory, the C++ code is compiled to WebAssembly, which is then bundled with the package's other JavaScript.
I'm now trying to use this package from another project, and if I run npm install --save mypackage, it installs the package's JavaScript but doesn't run its build process, so none of the WebAssembly is created, resulting in a broken package.
How you do it
In the scripts section of the package.json file, you can add a postinstall script, which will run each time the package is installed. You can find more about npm scripts here: https://docs.npmjs.com/misc/scripts
Inside package.json:
...
"scripts": {
  ...
  "postinstall": "npm run build"
}
...
Should you do it
The only valid reason to make the build happen on the target (consumer) system is if the build depends on the operating system or architecture of the target system, or on some configuration/properties of it. If the build doesn't depend on the target system, it should happen at publish time; that way it is done once for all consumers, saving bandwidth and time. Usually you also save space, because the bundled package is smaller than the source.
If you decide to package the built artifact (your bundled package), then it's a good idea to use the prepublish script:
"scripts": {
...
// this will make sure that you are always publishing the most updated built artifact, instead of having to manually run build each time you want to publish
"prepublish": "npm run build"
}
Another good idea is to also exclude the source files from the published package using .npmignore, to actually save space.
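For example, a minimal .npmignore for a package like this might look as follows (the file and folder names are placeholders for wherever your C++ sources live):

# .npmignore: keep the C++ sources out of the published tarball,
# shipping only the built JavaScript/WebAssembly artifacts
src/
*.cpp
*.h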
Can I use apt-get or other package managers in Cloud Foundry buildpacks or in the .profile scripts that come with apps, and if so, how? I expected it to work the same way as in a Dockerfile, but in my case it doesn't, with or without sudo.
Can I use apt-get or other package managers in Cloud Foundry buildpacks or .profile scripts that come with apps; and if I can, how to do it?
No. Running apt-get or a package manager would typically require root access, and you do not get root access when the buildpack runs or when your application runs (this is a difference from Docker).
That said, you can do anything that doesn't require root access, so if you found a package manager that installs into the vcap user's home directory and doesn't need root, you could use that.
It depends on what you're trying to install, but in some cases you can work around this by downloading the .deb or .rpm file and manually extracting the binaries. This typically works OK for things like shared libraries. Just download the precompiled binary that matches your stack (cflinuxfs2 == Ubuntu Trusty). For other things, you can build your own binaries from source. This is what the buildpacks do; see binary-builder.
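As a sketch of the extraction approach (the package name and URL are made up for illustration; use the Trusty build of whatever library you actually need):

# download a precompiled .deb matching the stack (hypothetical package/URL)
curl -LO http://archive.ubuntu.com/ubuntu/pool/main/libf/libfoo/libfoo1_1.0-1_amd64.deb
# unpack it without root into a vendor directory bundled with the app
dpkg-deb -x libfoo1_1.0-1_amd64.deb vendor/
# at runtime (e.g. in .profile), make the shared libraries findable
export LD_LIBRARY_PATH="$HOME/vendor/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"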
Hope that helps!
We're using the hosted build agent on VSTS to build and release our ASP.NET Core code to Azure App Service.
My question is: can we run webpack to handle front-end tasks on this hosted build on VSTS, or do we have to do it manually before checking the code into our repository?
Update:
I'm utilizing the new ASP.NET Core Build (Preview) template that's available on VSTS.
[Screenshots of the template's out-of-the-box build steps omitted.]
For VSTS, we're working on an extension; it's currently in beta, and you can ask for a share.
Check the VSTS marketplace.
Check this GitHub repo.
Webpack is definitely not a first-class citizen for VS2015 and VSTS. Streamlining webpack for CI/CD has been a real headache in my case, especially as webpack was introduced hastily to solve dreadful performance issues with a large monolithic SPA (ASP.NET 4.6, Kendo, 15,000 files, 2,000 folders). To cut it short, after trying many scenarios to make sure that freshly rebuilt bundles would end up in IIS and in the Azure web app, I settled on a 2-pass build. The sequence of VSTS tasks is as follows: npm install (global), npm install (local), npm install webpack (local), npm install webpack (global), build pass 1, webpack, build pass 2, etc. This works with hosted and private agents, provided you supply the proper path for webpack, as webpack is installed in a different location on hosted and on private agents (I did not find a way to choose the webpack install location for consistency). I scorch everything before starting the build. You also need to do two things in the VS2015 solution: (1) unload the "built" folder, and (2) add <Content Include="Built\**" /> to the project file. The "built" folder contains the bundles and should appear greyed out; otherwise there are more bad surprises and instabilities to deal with...
The Build-Pass #2 task in the VSTS build collects the fresh bundles generated by Build-Pass #1 and includes them automatically in the package to be published.
Without a second build pass, collecting the bundles and merging them into the zip package is a nightmare, especially when you have 15,000 files to unzip and then rezip (300 ms per file!). I did not find a file-merging capability that I could readily use in VSTS.
I have my ears to the ground, listening for someone to come up with a more efficient CI/CD scheme for webpack. In the meanwhile, my 2-pass-build workaround is working flawlessly, but it is indeed slow.
I anticipate that advances with ASP.NET Core, Angular 2, and webpack will solve this elegantly.
I am just starting out with Ember addons, and one of the difficulties I am facing is debugging. I have a separate repo for my addon (let's name it my-addon for now), and every time I make any change, I have to:
1) commit it
2) push the changes
3) go to the consuming app and reinstall the addon from git (at least re-run npm install git:address so I get the latest changes)
4) run ember g my-addon (because I am on an older CLI)
5) do a build
6) and check if things are working
This process is kind of tedious. I was wondering if I can place the addon (all of it) within the consuming app itself, at least in the dev phase, so I can just build my Ember app and test the addon in the consuming app itself, and once I feel good about it, push it to my local git repo.
Any thoughts or approaches on how you folks do it? Or maybe I am just missing something and doing it wrong!
Thanks,
Dee
If you use ember-cli you can link your local addon in the consuming app. You can find all the details in the user guide.
Note that watchman doesn't observe symlinked local addons (there are a couple of issues open about this on both ember-cli and watchman). I resolved it by removing watchman and falling back to NodeWatcher (I'm on a Mac).
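A minimal sketch of that linking workflow (the paths and addon name are examples):

# in the addon repo: register a global symlink
cd ~/dev/my-addon
npm link
# in the consuming app: point node_modules/my-addon at the addon repo
cd ~/dev/my-app
npm link my-addon

After this, edits in the addon repo show up in the consuming app on the next build, with no reinstall step.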
I am pretty sure the solution provided by @GUL must work too, but what worked for me was:
1) in the consuming dev app, I created a folder called addons and placed all my addon code there
2) in the consuming dev app's package.json, I added:
"ember-addon": {
"paths": [
"addons/ember-chart"
]
}
and that worked for me!
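For what it's worth, newer versions of ember-cli can scaffold this same layout with the in-repo-addon blueprint (it uses a lib/ folder rather than addons/ by default, and updates the ember-addon paths entry for you):

ember generate in-repo-addon my-addon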
The top answer is best here. I just wanted to offer an alternative that is useful in certain situations: run npm pack at the root of the in-development addon, then cd back to the parent project, run npm install ../ember-composable-helpers-2.2.0.tgz, and check if things are working.
npm pack will create a tarball, as if the package had been published on npm.
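Spelled out with example paths (the directory layout here is an assumption):

cd ~/dev/ember-composable-helpers    # root of the in-development addon
npm pack                             # emits ember-composable-helpers-2.2.0.tgz
cd ~/dev/my-app                      # back to the consuming project
npm install ../ember-composable-helpers/ember-composable-helpers-2.2.0.tgz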
I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs and managing dependencies manually is somewhat cumbersome and I would like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang lines in any Python scripts written to the virtualenv's bin directory. These will end up being full path names from your build environment, which probably won't be the same directory where you end up installing the virtualenv. So you may need to add some sed calls in your RPM's post-install scriptlet to adjust the paths.
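A rough sketch of that post-install fix-up, assuming the virtualenv was built under /home/builder/rpmbuild/venv and installs to /opt/myapp/venv (both paths are assumptions):

# %post scriptlet: rewrite build-time shebang paths to the installed prefix
grep -rlZ '/home/builder/rpmbuild/venv' /opt/myapp/venv/bin \
  | xargs -0 -r sed -i 's|/home/builder/rpmbuild/venv|/opt/myapp/venv|g'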